Deep Learning
Ian Goodfellow
Yoshua Bengio
Aaron Courville
Contents
Website vii
Acknowledgments viii
Notation xi
1 Introduction 1
1.1 Who Should Read This Book? . . . . . . . . . . . . . . . . . . . . 8
1.2 Historical Trends in Deep Learning . . . . . . . . . . . . . . . . . 11
I Applied Math and Machine Learning Basics 29
2 Linear Algebra 31
2.1 Scalars, Vectors, Matrices and Tensors . . . . . . . . . . . . . . . 31
2.2 Multiplying Matrices and Vectors . . . . . . . . . . . . . . . . . . 34
2.3 Identity and Inverse Matrices . . . . . . . . . . . . . . . . . . . . 36
2.4 Linear Dependence and Span . . . . . . . . . . . . . . . . . . . . 37
2.5 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.6 Special Kinds of Matrices and Vectors . . . . . . . . . . . . . . . 40
2.7 Eigendecomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.8 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . 44
2.9 The Moore-Penrose Pseudoinverse . . . . . . . . . . . . . . . . . . 45
2.10 The Trace Operator . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.11 The Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.12 Example: Principal Components Analysis . . . . . . . . . . . . . 48
3 Probability and Information Theory 53
3.1 Why Probability? . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.3 Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . 56
3.4 Marginal Probability . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.5 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . 59
3.6 The Chain Rule of Conditional Probabilities . . . . . . . . . . . . 59
3.7 Independence and Conditional Independence . . . . . . . . . . . . 60
3.8 Expectation, Variance and Covariance . . . . . . . . . . . . . . . 60
3.9 Common Probability Distributions . . . . . . . . . . . . . . . . . 62
3.10 Useful Properties of Common Functions . . . . . . . . . . . . . . 67
3.11 Bayes’ Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.12 Technical Details of Continuous Variables . . . . . . . . . . . . . 71
3.13 Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.14 Structured Probabilistic Models . . . . . . . . . . . . . . . . . . . 75
4 Numerical Computation 80
4.1 Overflow and Underflow . . . . . . . . . . . . . . . . . . . . . . . 80
4.2 Poor Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 Gradient-Based Optimization . . . . . . . . . . . . . . . . . . . . 82
4.4 Constrained Optimization . . . . . . . . . . . . . . . . . . . . . . 93
4.5 Example: Linear Least Squares . . . . . . . . . . . . . . . . . . . 96
5 Machine Learning Basics 98
5.1 Learning Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.2 Capacity, Overfitting and Underfitting . . . . . . . . . . . . . . . 110
5.3 Hyperparameters and Validation Sets . . . . . . . . . . . . . . . . 120
5.4 Estimators, Bias and Variance . . . . . . . . . . . . . . . . . . . . 122
5.5 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . . . 131
5.6 Bayesian Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.7 Supervised Learning Algorithms . . . . . . . . . . . . . . . . . . . 140
5.8 Unsupervised Learning Algorithms . . . . . . . . . . . . . . . . . 146
5.9 Stochastic Gradient Descent . . . . . . . . . . . . . . . . . . . . . 151
5.10 Building a Machine Learning Algorithm . . . . . . . . . . . . . . 153
5.11 Challenges Motivating Deep Learning . . . . . . . . . . . . . . . . 155
II Deep Networks: Modern Practices 166
6 Deep Feedforward Networks 168
6.1 Example: Learning XOR . . . . . . . . . . . . . . . . . . . . . . . 171
6.2 Gradient-Based Learning . . . . . . . . . . . . . . . . . . . . . . . 177
6.3 Hidden Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.4 Architecture Design . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.5 Back-Propagation and Other Differentiation Algorithms . . . . . 204
6.6 Historical Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
7 Regularization for Deep Learning 228
7.1 Parameter Norm Penalties . . . . . . . . . . . . . . . . . . . . . . 230
7.2 Norm Penalties as Constrained Optimization . . . . . . . . . . . . 237
7.3 Regularization and Under-Constrained Problems . . . . . . . . . 239
7.4 Dataset Augmentation . . . . . . . . . . . . . . . . . . . . . . . . 240
7.5 Noise Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
7.6 Semi-Supervised Learning . . . . . . . . . . . . . . . . . . . . . . 243
7.7 Multi-Task Learning . . . . . . . . . . . . . . . . . . . . . . . . . 244
7.8 Early Stopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
7.9 Parameter Tying and Parameter Sharing . . . . . . . . . . . . . . 253
7.10 Sparse Representations . . . . . . . . . . . . . . . . . . . . . . . . 254
7.11 Bagging and Other Ensemble Methods . . . . . . . . . . . . . . . 256
7.12 Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
7.13 Adversarial Training . . . . . . . . . . . . . . . . . . . . . . . . . 268
7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier 270
8 Optimization for Training Deep Models 274
8.1 How Learning Differs from Pure Optimization . . . . . . . . . . . 275
8.2 Challenges in Neural Network Optimization . . . . . . . . . . . . 282
8.3 Basic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
8.4 Parameter Initialization Strategies . . . . . . . . . . . . . . . . . 301
8.5 Algorithms with Adaptive Learning Rates . . . . . . . . . . . . . 306
8.6 Approximate Second-Order Methods . . . . . . . . . . . . . . . . 310
8.7 Optimization Strategies and Meta-Algorithms . . . . . . . . . . . 317
9 Convolutional Networks 330
9.1 The Convolution Operation . . . . . . . . . . . . . . . . . . . . . 331
9.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
9.3 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
9.4 Convolution and Pooling as an Infinitely Strong Prior . . . . . . . 345
9.5 Variants of the Basic Convolution Function . . . . . . . . . . . . 347
9.6 Structured Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . 358
9.7 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
9.8 Efficient Convolution Algorithms . . . . . . . . . . . . . . . . . . 362
9.9 Random or Unsupervised Features . . . . . . . . . . . . . . . . . 363
9.10 The Neuroscientific Basis for Convolutional Networks . . . . . . . 364
9.11 Convolutional Networks and the History of Deep Learning . . . . 371
10 Sequence Modeling: Recurrent and Recursive Nets 373
10.1 Unfolding Computational Graphs . . . . . . . . . . . . . . . . . . 375
10.2 Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . 378
10.3 Bidirectional RNNs . . . . . . . . . . . . . . . . . . . . . . . . . . 394
10.4 Encoder-Decoder Sequence-to-Sequence Architectures . . . . . . . 396
10.5 Deep Recurrent Networks . . . . . . . . . . . . . . . . . . . . . . 398
10.6 Recursive Neural Networks . . . . . . . . . . . . . . . . . . . . . . 400
10.7 The Challenge of Long-Term Dependencies . . . . . . . . . . . . . 401
10.8 Echo State Networks . . . . . . . . . . . . . . . . . . . . . . . . . 404
10.9 Leaky Units and Other Strategies for Multiple Time Scales . . . . 406
10.10 The Long Short-Term Memory and Other Gated RNNs . . . . . . 408
10.11 Optimization for Long-Term Dependencies . . . . . . . . . . . . . 413
10.12 Explicit Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11 Practical Methodology 421
11.1 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . . . 422
11.2 Default Baseline Models . . . . . . . . . . . . . . . . . . . . . . . 425
11.3 Determining Whether to Gather More Data . . . . . . . . . . . . 426
11.4 Selecting Hyperparameters . . . . . . . . . . . . . . . . . . . . . . 427
11.5 Debugging Strategies . . . . . . . . . . . . . . . . . . . . . . . . . 436
11.6 Example: Multi-Digit Number Recognition . . . . . . . . . . . . . 440
12 Applications 443
12.1 Large-Scale Deep Learning . . . . . . . . . . . . . . . . . . . . . . 443
12.2 Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
12.3 Speech Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . 458
12.4 Natural Language Processing . . . . . . . . . . . . . . . . . . . . 461
12.5 Other Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 478
III Deep Learning Research 486
13 Linear Factor Models 489
13.1 Probabilistic PCA and Factor Analysis . . . . . . . . . . . . . . . 490
13.2 Independent Component Analysis (ICA) . . . . . . . . . . . . . . 491
13.3 Slow Feature Analysis . . . . . . . . . . . . . . . . . . . . . . . . 493
13.4 Sparse Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
13.5 Manifold Interpretation of PCA . . . . . . . . . . . . . . . . . . . 499
14 Autoencoders 502
14.1 Undercomplete Autoencoders . . . . . . . . . . . . . . . . . . . . 503
14.2 Regularized Autoencoders . . . . . . . . . . . . . . . . . . . . . . 504
14.3 Representational Power, Layer Size and Depth . . . . . . . . . . . 508
14.4 Stochastic Encoders and Decoders . . . . . . . . . . . . . . . . . . 509
14.5 Denoising Autoencoders . . . . . . . . . . . . . . . . . . . . . . . 510
14.6 Learning Manifolds with Autoencoders . . . . . . . . . . . . . . . 515
14.7 Contractive Autoencoders . . . . . . . . . . . . . . . . . . . . . . 521
14.8 Predictive Sparse Decomposition . . . . . . . . . . . . . . . . . . 523
14.9 Applications of Autoencoders . . . . . . . . . . . . . . . . . . . . 524
15 Representation Learning 526
15.1 Greedy Layer-Wise Unsupervised Pretraining . . . . . . . . . . . 528
15.2 Transfer Learning and Domain Adaptation . . . . . . . . . . . . . 536
15.3 Semi-Supervised Disentangling of Causal Factors . . . . . . . . . 541
15.4 Distributed Representation . . . . . . . . . . . . . . . . . . . . . . 546
15.5 Exponential Gains from Depth . . . . . . . . . . . . . . . . . . . 553
15.6 Providing Clues to Discover Underlying Causes . . . . . . . . . . 554
16 Structured Probabilistic Models for Deep Learning 558
16.1 The Challenge of Unstructured Modeling . . . . . . . . . . . . . . 559
16.2 Using Graphs to Describe Model Structure . . . . . . . . . . . . . 563
16.3 Sampling from Graphical Models . . . . . . . . . . . . . . . . . . 580
16.4 Advantages of Structured Modeling . . . . . . . . . . . . . . . . . 582
16.5 Learning about Dependencies . . . . . . . . . . . . . . . . . . . . 582
16.6 Inference and Approximate Inference . . . . . . . . . . . . . . . . 584
16.7 The Deep Learning Approach to Structured Probabilistic Models 585
17 Monte Carlo Methods 590
17.1 Sampling and Monte Carlo Methods . . . . . . . . . . . . . . . . 590
17.2 Importance Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 592
17.3 Markov Chain Monte Carlo Methods . . . . . . . . . . . . . . . . 595
17.4 Gibbs Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
17.5 The Challenge of Mixing between Separated Modes . . . . . . . . 599
18 Confronting the Partition Function 605
18.1 The Log-Likelihood Gradient . . . . . . . . . . . . . . . . . . . . 606
18.2 Stochastic Maximum Likelihood and Contrastive Divergence . . . 607
18.3 Pseudolikelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
18.4 Score Matching and Ratio Matching . . . . . . . . . . . . . . . . 617
18.5 Denoising Score Matching . . . . . . . . . . . . . . . . . . . . . . 619
18.6 Noise-Contrastive Estimation . . . . . . . . . . . . . . . . . . . . 620
18.7 Estimating the Partition Function . . . . . . . . . . . . . . . . . . 623
19 Approximate Inference 631
19.1 Inference as Optimization . . . . . . . . . . . . . . . . . . . . . . 633
19.2 Expectation Maximization . . . . . . . . . . . . . . . . . . . . . . 634
19.3 MAP Inference and Sparse Coding . . . . . . . . . . . . . . . . . 635
19.4 Variational Inference and Learning . . . . . . . . . . . . . . . . . 638
19.5 Learned Approximate Inference . . . . . . . . . . . . . . . . . . . 651
20 Deep Generative Models 654
20.1 Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . . . . . 654
20.2 Restricted Boltzmann Machines . . . . . . . . . . . . . . . . . . . 656
20.3 Deep Belief Networks . . . . . . . . . . . . . . . . . . . . . . . . . 660
20.4 Deep Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . . 663
20.5 Boltzmann Machines for Real-Valued Data . . . . . . . . . . . . . 676
20.6 Convolutional Boltzmann Machines . . . . . . . . . . . . . . . . . 683
20.7 Boltzmann Machines for Structured or Sequential Outputs . . . . 685
20.8 Other Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . 686
20.9 Back-Propagation through Random Operations . . . . . . . . . . 687
20.10 Directed Generative Nets . . . . . . . . . . . . . . . . . . . . . . . 692
20.11 Drawing Samples from Autoencoders . . . . . . . . . . . . . . . . 711
20.12 Generative Stochastic Networks . . . . . . . . . . . . . . . . . . . 714
20.13 Other Generation Schemes . . . . . . . . . . . . . . . . . . . . . . 716
20.14 Evaluating Generative Models . . . . . . . . . . . . . . . . . . . . 717
20.15 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
Bibliography 721
Index 777
Website
www.deeplearningbook.org
This book is accompanied by the above website. The website provides a
variety of supplementary material, including exercises, lecture slides, corrections of
mistakes, and other resources that should be useful to both readers and instructors.
Acknowledgments
This book would not have been possible without the contributions of many people.
We would like to thank those who commented on our proposal for the book
and helped plan its contents and organization: Guillaume Alain, Kyunghyun Cho,
Çağlar Gülçehre, David Krueger, Hugo Larochelle, Razvan Pascanu and Thomas
Rohée.
We would like to thank the people who offered feedback on the content of the
book itself. Some offered feedback on many chapters: Martín Abadi, Guillaume
Alain, Ion Androutsopoulos, Fred Bertsch, Olexa Bilaniuk, Ufuk Can Biçici, Matko
Bošnjak, John Boersma, Greg Brockman, Alexandre de Brébisson, Pierre Luc
Carrier, Sarath Chandar, Pawel Chilinski, Mark Daoust, Oleg Dashevskii, Laurent
Dinh, Stephan Dreseitl, Jim Fan, Miao Fan, Meire Fortunato, Frédéric Francis,
Nando de Freitas, Çağlar Gülçehre, Jurgen Van Gael, Javier Alonso García,
Jonathan Hunt, Gopi Jeyaram, Chingiz Kabytayev, Lukasz Kaiser, Varun Kanade,
Asifullah Khan, Akiel Khan, John King, Diederik P. Kingma, Yann LeCun, Rudolf
Mathey, Matías Mattamala, Abhinav Maurya, Kevin Murphy, Oleg Mürk, Roman
Novak, Augustus Q. Odena, Simon Pavlik, Karl Pichotta, Eddie Pierce, Kari Pulli,
Roussel Rahman, Tapani Raiko, Anurag Ranjan, Johannes Roith, Mihaela Rosca,
Halis Sak, César Salgado, Grigory Sapunov, Yoshinori Sasaki, Mike Schuster,
Julian Serban, Nir Shabat, Ken Shirriff, Andre Simpelo, Scott Stanley, David
Sussillo, Ilya Sutskever, Carles Gelada Sáez, Graham Taylor, Valentin Tolmer,
Massimiliano Tomassoli, An Tran, Shubhendu Trivedi, Alexey Umnov, Vincent
Vanhoucke, Marco Visentini-Scarzanella, Martin Vita, David Warde-Farley, Dustin
Webb, Kelvin Xu, Wei Xue, Ke Yang, Li Yao, Zygmunt Zając and Ozan Çağlayan.
We would also like to thank those who provided us with useful feedback on
individual chapters:
• Notation: Zhang Yuanhang.
• Chapter 1, Introduction: Yusuf Akgul, Sebastien Bratieres, Samira Ebrahimi, Charlie Gorichanaz, Brendan Loudermilk, Eric Morris, Cosmin Pârvulescu and Alfredo Solano.
• Chapter 2, Linear Algebra: Amjad Almahairi, Nikola Banić, Kevin Bennett, Philippe Castonguay, Oscar Chang, Eric Fosler-Lussier, Andrey Khalyavin, Sergey Oreshkov, István Petrás, Dennis Prangle, Thomas Rohée, Gitanjali Gulve Sehgal, Colby Toland, Alessandro Vitale and Bob Welland.
• Chapter 3, Probability and Information Theory: John Philip Anderson, Kai Arulkumaran, Vincent Dumoulin, Rui Fa, Stephan Gouws, Artem Oboturov, Antti Rasmus, Alexey Surkov and Volker Tresp.
• Chapter 4, Numerical Computation: Tran Lam An, Ian Fischer and Hu Yuhuang.
• Chapter 5, Machine Learning Basics: Dzmitry Bahdanau, Justin Domingue, Nikhil Garg, Makoto Otsuka, Bob Pepin, Philip Popien, Emmanuel Rayner, Peter Shepard, Kee-Bong Song, Zheng Sun and Andy Wu.
• Chapter 6, Deep Feedforward Networks: Uriel Berdugo, Fabrizio Bottarel, Elizabeth Burl, Ishan Durugkar, Jeff Hlywa, Jong Wook Kim, David Krueger and Aditya Kumar Praharaj.
• Chapter 7, Regularization for Deep Learning: Morten Kolbæk, Kshitij Lauria, Inkyu Lee, Sunil Mohan, Hai Phong Phan and Joshua Salisbury.
• Chapter 8, Optimization for Training Deep Models: Marcel Ackermann, Peter Armitage, Rowel Atienza, Andrew Brock, Tegan Maharaj, James Martens, Kashif Rasul, Klaus Strobl and Nicholas Turner.
• Chapter 9, Convolutional Networks: Martín Arjovsky, Eugene Brevdo, Konstantin Divilov, Eric Jensen, Mehdi Mirza, Alex Paino, Marjorie Sayer, Ryan Stout and Wentao Wu.
• Chapter 10, Sequence Modeling: Recurrent and Recursive Nets: Gökçen Eraslan, Steven Hickson, Razvan Pascanu, Lorenzo von Ritter, Rui Rodrigues, Dmitriy Serdyuk, Dongyu Shi and Kaiyu Yang.
• Chapter 11, Practical Methodology: Daniel Beckstein.
• Chapter 12, Applications: George Dahl, Vladimir Nekrasov and Ribana Roscher.
• Chapter 13, Linear Factor Models: Jayanth Koushik.
• Chapter 15, Representation Learning: Kunal Ghosh.
• Chapter 16, Structured Probabilistic Models for Deep Learning: Minh Lê and Anton Varfolom.
• Chapter 18, Confronting the Partition Function: Sam Bowman.
• Chapter 19, Approximate Inference: Yujia Bao.
• Chapter 20, Deep Generative Models: Nicolas Chapados, Daniel Galvez, Wenming Ma, Fady Medhat, Shakir Mohamed and Grégoire Montavon.
• Bibliography: Lukas Michelbacher and Leslie N. Smith.
We also want to thank those who allowed us to reproduce images, figures or
data from their publications. We indicate their contributions in the figure captions
throughout the text.
We would like to thank Lu Wang for writing pdf2htmlEX, which we used to
make the web version of the book, and for offering support to improve the quality
of the resulting HTML.
We would like to thank Ian’s wife Daniela Flori Goodfellow for patiently
supporting Ian during the writing of the book as well as for help with proofreading.
We would like to thank the Google Brain team for providing an intellectual
environment where Ian could devote a tremendous amount of time to writing this
book and receive feedback and guidance from colleagues. We would especially like
to thank Ian’s former manager, Greg Corrado, and his current manager, Samy
Bengio, for their support of this project. Finally, we would like to thank Geoffrey
Hinton for encouragement when writing was difficult.
Notation
This section provides a concise reference describing the notation used throughout
this book. If you are unfamiliar with any of the corresponding mathematical
concepts, we describe most of these ideas in chapters 2–4.
Numbers and Arrays
$a$: A scalar (integer or real)
$\boldsymbol{a}$: A vector
$\boldsymbol{A}$: A matrix
$\mathsf{A}$: A tensor
$\boldsymbol{I}_n$: Identity matrix with $n$ rows and $n$ columns
$\boldsymbol{I}$: Identity matrix with dimensionality implied by context
$\boldsymbol{e}^{(i)}$: Standard basis vector $[0, \dots, 0, 1, 0, \dots, 0]$ with a 1 at position $i$
$\operatorname{diag}(\boldsymbol{a})$: A square, diagonal matrix with diagonal entries given by $\boldsymbol{a}$
$\mathrm{a}$: A scalar random variable
$\mathbf{a}$: A vector-valued random variable
$\mathbf{A}$: A matrix-valued random variable
Sets and Graphs
$\mathbb{A}$: A set
$\mathbb{R}$: The set of real numbers
$\{0, 1\}$: The set containing 0 and 1
$\{0, 1, \dots, n\}$: The set of all integers between $0$ and $n$
$[a, b]$: The real interval including $a$ and $b$
$(a, b]$: The real interval excluding $a$ but including $b$
$\mathbb{A} \setminus \mathbb{B}$: Set subtraction, i.e., the set containing the elements of $\mathbb{A}$ that are not in $\mathbb{B}$
$\mathcal{G}$: A graph
$Pa_{\mathcal{G}}(\mathrm{x}_i)$: The parents of $\mathrm{x}_i$ in $\mathcal{G}$
Indexing
$a_i$: Element $i$ of vector $\boldsymbol{a}$, with indexing starting at 1
$a_{-i}$: All elements of vector $\boldsymbol{a}$ except for element $i$
$A_{i,j}$: Element $i, j$ of matrix $\boldsymbol{A}$
$\boldsymbol{A}_{i,:}$: Row $i$ of matrix $\boldsymbol{A}$
$\boldsymbol{A}_{:,i}$: Column $i$ of matrix $\boldsymbol{A}$
$\mathsf{A}_{i,j,k}$: Element $(i, j, k)$ of a 3-D tensor $\mathsf{A}$
$\mathsf{A}_{:,:,i}$: 2-D slice of a 3-D tensor
$\mathrm{a}_i$: Element $i$ of the random vector $\mathbf{a}$
Linear Algebra Operations
$\boldsymbol{A}^\top$: Transpose of matrix $\boldsymbol{A}$
$\boldsymbol{A}^+$: Moore-Penrose pseudoinverse of $\boldsymbol{A}$
$\boldsymbol{A} \odot \boldsymbol{B}$: Element-wise (Hadamard) product of $\boldsymbol{A}$ and $\boldsymbol{B}$
$\det(\boldsymbol{A})$: Determinant of $\boldsymbol{A}$
Calculus
$\frac{dy}{dx}$: Derivative of $y$ with respect to $x$
$\frac{\partial y}{\partial x}$: Partial derivative of $y$ with respect to $x$
$\nabla_{\boldsymbol{x}} y$: Gradient of $y$ with respect to $\boldsymbol{x}$
$\nabla_{\boldsymbol{X}} y$: Matrix derivatives of $y$ with respect to $\boldsymbol{X}$
$\nabla_{\mathsf{X}} y$: Tensor containing derivatives of $y$ with respect to $\mathsf{X}$
$\frac{\partial f}{\partial \boldsymbol{x}}$: Jacobian matrix $\boldsymbol{J} \in \mathbb{R}^{m \times n}$ of $f : \mathbb{R}^n \to \mathbb{R}^m$
$\nabla_{\boldsymbol{x}}^2 f(\boldsymbol{x})$ or $\boldsymbol{H}(f)(\boldsymbol{x})$: The Hessian matrix of $f$ at input point $\boldsymbol{x}$
$\int f(\boldsymbol{x})\, d\boldsymbol{x}$: Definite integral over the entire domain of $\boldsymbol{x}$
$\int_{\mathbb{S}} f(\boldsymbol{x})\, d\boldsymbol{x}$: Definite integral with respect to $\boldsymbol{x}$ over the set $\mathbb{S}$
Probability and Information Theory
$\mathrm{a} \perp \mathrm{b}$: The random variables $\mathrm{a}$ and $\mathrm{b}$ are independent
$\mathrm{a} \perp \mathrm{b} \mid \mathrm{c}$: They are conditionally independent given $\mathrm{c}$
$P(\mathrm{a})$: A probability distribution over a discrete variable
$p(\mathrm{a})$: A probability distribution over a continuous variable, or over a variable whose type has not been specified
$\mathrm{a} \sim P$: Random variable $\mathrm{a}$ has distribution $P$
$\mathbb{E}_{\mathrm{x} \sim P}[f(x)]$ or $\mathbb{E} f(x)$: Expectation of $f(x)$ with respect to $P(\mathrm{x})$
$\operatorname{Var}(f(x))$: Variance of $f(x)$ under $P(\mathrm{x})$
$\operatorname{Cov}(f(x), g(x))$: Covariance of $f(x)$ and $g(x)$ under $P(\mathrm{x})$
$H(\mathrm{x})$: Shannon entropy of the random variable $\mathrm{x}$
$D_{\mathrm{KL}}(P \,\|\, Q)$: Kullback-Leibler divergence of $P$ and $Q$
$\mathcal{N}(\boldsymbol{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma})$: Gaussian distribution over $\boldsymbol{x}$ with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$
Functions
$f : \mathbb{A} \to \mathbb{B}$: The function $f$ with domain $\mathbb{A}$ and range $\mathbb{B}$
$f \circ g$: Composition of the functions $f$ and $g$
$f(\boldsymbol{x}; \boldsymbol{\theta})$: A function of $\boldsymbol{x}$ parametrized by $\boldsymbol{\theta}$. (Sometimes we write $f(\boldsymbol{x})$ and omit the argument $\boldsymbol{\theta}$ to lighten notation)
$\log x$: Natural logarithm of $x$
$\sigma(x)$: Logistic sigmoid, $\frac{1}{1 + \exp(-x)}$
$\zeta(x)$: Softplus, $\log(1 + \exp(x))$
$\|\boldsymbol{x}\|_p$: $L^p$ norm of $\boldsymbol{x}$
$\|\boldsymbol{x}\|$: $L^2$ norm of $\boldsymbol{x}$
$x^+$: Positive part of $x$, i.e., $\max(0, x)$
$\boldsymbol{1}_{\mathrm{condition}}$: 1 if the condition is true, 0 otherwise
Sometimes we use a function $f$ whose argument is a scalar but apply it to a vector, matrix, or tensor: $f(\boldsymbol{x})$, $f(\boldsymbol{X})$, or $f(\mathsf{X})$. This denotes the application of $f$ to the array element-wise. For example, if $\mathsf{C} = \sigma(\mathsf{X})$, then $\mathsf{C}_{i,j,k} = \sigma(\mathsf{X}_{i,j,k})$ for all valid values of $i$, $j$ and $k$.
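As a minimal illustrative sketch of this element-wise convention (assuming NumPy; the array shape and values are arbitrary and for illustration only):

```python
import numpy as np

def sigma(x):
    # Logistic sigmoid, 1 / (1 + exp(-x)); defined on scalars but usable on whole arrays.
    return 1.0 / (1.0 + np.exp(-x))

X = np.random.randn(2, 3, 4)   # a 3-D tensor of arbitrary values (illustrative only)
C = sigma(X)                   # applies sigma to every element of X

assert C.shape == X.shape
assert np.isclose(C[1, 2, 3], sigma(X[1, 2, 3]))   # C_{i,j,k} = sigma(X_{i,j,k})
```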
Datasets and Distributions
$p_{\mathrm{data}}$: The data generating distribution
$\hat{p}_{\mathrm{data}}$: The empirical distribution defined by the training set
$\mathbb{X}$: A set of training examples
$\boldsymbol{x}^{(i)}$: The $i$-th example (input) from a dataset
$y^{(i)}$ or $\boldsymbol{y}^{(i)}$: The target associated with $\boldsymbol{x}^{(i)}$ for supervised learning
$\boldsymbol{X}$: The $m \times n$ matrix with input example $\boldsymbol{x}^{(i)}$ in row $\boldsymbol{X}_{i,:}$
Chapter 1
Introduction
Inventors have long dreamed of creating machines that think. This desire dates
back to at least the time of ancient Greece. The mythical figures Pygmalion,
Daedalus, and Hephaestus may all be interpreted as legendary inventors, and
Galatea, Talos, and Pandora may all be regarded as artificial life (Ovid and Martin,
2004; Sparkes, 1996; Tandy, 1997).
When programmable computers were first conceived, people wondered whether
such machines might become intelligent, over a hundred years before one was
built (Lovelace, 1842). Today, artificial intelligence (AI) is a thriving field with
many practical applications and active research topics. We look to intelligent
software to automate routine labor, understand speech or images, make diagnoses
in medicine and support basic scientific research.
In the early days of artificial intelligence, the field rapidly tackled and solved
problems that are intellectually difficult for human beings but relatively straight-
forward for computers—problems that can be described by a list of formal, math-
ematical rules. The true challenge to artificial intelligence proved to be solving
the tasks that are easy for people to perform but hard for people to describe
formally—problems that we solve intuitively, that feel automatic, like recognizing
spoken words or faces in images.
This book is about a solution to these more intuitive problems. This solution is
to allow computers to learn from experience and understand the world in terms of a
hierarchy of concepts, with each concept defined in terms of its relation to simpler
concepts. By gathering knowledge from experience, this approach avoids the need
for human operators to formally specify all of the knowledge that the computer
needs. The hierarchy of concepts allows the computer to learn complicated concepts
by building them out of simpler ones. If we draw a graph showing how these
concepts are built on top of each other, the graph is deep, with many layers. For
this reason, we call this approach to AI deep learning.
Many of the early successes of AI took place in relatively sterile and formal
environments and did not require computers to have much knowledge about
the world. For example, IBM’s Deep Blue chess-playing system defeated world
champion Garry Kasparov in 1997 (Hsu, 2002). Chess is of course a very simple
world, containing only sixty-four locations and thirty-two pieces that can move
in only rigidly circumscribed ways. Devising a successful chess strategy is a
tremendous accomplishment, but the challenge is not due to the difficulty of
describing the set of chess pieces and allowable moves to the computer. Chess
can be completely described by a very brief list of completely formal rules, easily
provided ahead of time by the programmer.
Ironically, abstract and formal tasks that are among the most difficult mental
undertakings for a human being are among the easiest for a computer. Computers
have long been able to defeat even the best human chess player, but are only
recently matching some of the abilities of average human beings to recognize objects
or speech. A person’s everyday life requires an immense amount of knowledge
about the world. Much of this knowledge is subjective and intuitive, and therefore
difficult to articulate in a formal way. Computers need to capture this same
knowledge in order to behave in an intelligent way. One of the key challenges in
artificial intelligence is how to get this informal knowledge into a computer.
Several artificial intelligence projects have sought to hard-code knowledge about
the world in formal languages. A computer can reason about statements in these
formal languages automatically using logical inference rules. This is known as the
knowledge base approach to artificial intelligence. None of these projects has led
to a major success. One of the most famous such projects is Cyc (Lenat and Guha,
1989). Cyc is an inference engine and a database of statements in a language
called CycL. These statements are entered by a staff of human supervisors. It is an
unwieldy process. People struggle to devise formal rules with enough complexity
to accurately describe the world. For example, Cyc failed to understand a story
about a person named Fred shaving in the morning (Linde, 1992). Its inference
engine detected an inconsistency in the story: it knew that people do not have
electrical parts, but because Fred was holding an electric razor, it believed the
entity “FredWhileShaving” contained electrical parts. It therefore asked whether
Fred was still a person while he was shaving.
The difficulties faced by systems relying on hard-coded knowledge suggest
that AI systems need the ability to acquire their own knowledge, by extracting
patterns from raw data. This capability is known as machine learning. The
introduction of machine learning allowed computers to tackle problems involving
knowledge of the real world and make decisions that appear subjective. A simple
machine learning algorithm called logistic regression can determine whether to
recommend cesarean delivery (Mor-Yosef et al., 1990). A simple machine learning
algorithm called naive Bayes can separate legitimate e-mail from spam e-mail.
The performance of these simple machine learning algorithms depends heavily
on the representation of the data they are given. For example, when logistic
regression is used to recommend cesarean delivery, the AI system does not examine
the patient directly. Instead, the doctor tells the system several pieces of relevant
information, such as the presence or absence of a uterine scar. Each piece of
information included in the representation of the patient is known as a feature.
Logistic regression learns how each of these features of the patient correlates with
various outcomes. However, it cannot influence the way that the features are
defined in any way. If logistic regression was given an MRI scan of the patient,
rather than the doctor’s formalized report, it would not be able to make useful
predictions. Individual pixels in an MRI scan have negligible correlation with any
complications that might occur during delivery.
This dependence on representations is a general phenomenon that appears
throughout computer science and even daily life. In computer science, opera-
tions such as searching a collection of data can proceed exponentially faster if
the collection is structured and indexed intelligently. People can easily perform
arithmetic on Arabic numerals, but find arithmetic on Roman numerals much
more time-consuming. It is not surprising that the choice of representation has an
enormous effect on the performance of machine learning algorithms. For a simple
visual example, see figure 1.1.
Many artificial intelligence tasks can be solved by designing the right set of
features to extract for that task, then providing these features to a simple machine
learning algorithm. For example, a useful feature for speaker identification from
sound is an estimate of the size of the speaker’s vocal tract. It therefore gives a strong
clue as to whether the speaker is a man, woman, or child.
However, for many tasks, it is difficult to know what features should be extracted.
For example, suppose that we would like to write a program to detect cars in
photographs. We know that cars have wheels, so we might like to use the presence
of a wheel as a feature. Unfortunately, it is difficult to describe exactly what a
wheel looks like in terms of pixel values. A wheel has a simple geometric shape but
its image may be complicated by shadows falling on the wheel, the sun glaring off
the metal parts of the wheel, the fender of the car or an object in the foreground
obscuring part of the wheel, and so on.
Figure 1.1: Example of different representations: suppose we want to separate two
categories of data by drawing a line between them in a scatterplot. In the plot on the left,
we represent some data using Cartesian coordinates, and the task is impossible. In the plot
on the right, we represent the data with polar coordinates and the task becomes simple to
solve with a vertical line. Figure produced in collaboration with David Warde-Farley.
One solution to this problem is to use machine learning to discover not only
the mapping from representation to output but also the representation itself.
This approach is known as representation learning. Learned representations
often result in much better performance than can be obtained with hand-designed
representations. They also allow AI systems to rapidly adapt to new tasks, with
minimal human intervention. A representation learning algorithm can discover a
good set of features for a simple task in minutes, or a complex task in hours to
months. Manually designing features for a complex task requires a great deal of
human time and effort; it can take decades for an entire community of researchers.
The quintessential example of a representation learning algorithm is the au-
toencoder. An autoencoder is the combination of an encoder function that
converts the input data into a different representation, and a decoder function
that converts the new representation back into the original format. Autoencoders
are trained to preserve as much information as possible when an input is run
through the encoder and then the decoder, but are also trained to make the new
representation have various nice properties. Different kinds of autoencoders aim to
achieve different kinds of properties.
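To make the encoder/decoder structure concrete, here is a minimal sketch, assuming NumPy, a linear encoder and decoder with arbitrary illustrative dimensions, and a squared-error reconstruction objective; real autoencoders differ in the functions used and in how the objective is regularized:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))           # 100 examples with 10 features (illustrative data)

W_enc = rng.standard_normal((10, 3)) * 0.1   # encoder weights: 10-D input -> 3-D representation
W_dec = rng.standard_normal((3, 10)) * 0.1   # decoder weights: 3-D representation -> 10-D output

def encode(x):
    # Encoder: converts the input data into a different (here, lower-dimensional) representation.
    return x @ W_enc

def decode(h):
    # Decoder: converts the new representation back into the original format.
    return h @ W_dec

reconstruction = decode(encode(X))
reconstruction_error = np.mean((X - reconstruction) ** 2)   # training would drive this down
```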
When designing features or algorithms for learning features, our goal is usually
to separate the factors of variation that explain the observed data. In this
context, we use the word “factors” simply to refer to separate sources of influence;
the factors are usually not combined by multiplication. Such factors are often not
quantities that are directly observed. Instead, they may exist either as unobserved
objects or unobserved forces in the physical world that affect observable quantities.
They may also exist as constructs in the human mind that provide useful simplifying
explanations or inferred causes of the observed data. They can be thought of as
concepts or abstractions that help us make sense of the rich variability in the data.
When analyzing a speech recording, the factors of variation include the speaker’s
age, their sex, their accent and the words that they are speaking. When analyzing
an image of a car, the factors of variation include the position of the car, its color,
and the angle and brightness of the sun.
A major source of difficulty in many real-world artificial intelligence applications
is that many of the factors of variation influence every single piece of data we are
able to observe. The individual pixels in an image of a red car might be very close
to black at night. The shape of the car’s silhouette depends on the viewing angle.
Most applications require us to disentangle the factors of variation and discard the
ones that we do not care about.
Of course, it can be very difficult to extract such high-level, abstract features
from raw data. Many of these factors of variation, such as a speaker’s accent,
can be identified only using sophisticated, nearly human-level understanding of
the data. When it is nearly as difficult to obtain a representation as to solve the
original problem, representation learning does not, at first glance, seem to help us.
Deep learning solves this central problem in representation learning by intro-
ducing representations that are expressed in terms of other, simpler representations.
Deep learning allows the computer to build complex concepts out of simpler con-
cepts. Figure 1.2 shows how a deep learning system can represent the concept of
an image of a person by combining simpler concepts, such as corners and contours,
which are in turn defined in terms of edges.
The quintessential example of a deep learning model is the feedforward deep
network or multilayer perceptron (MLP). A multilayer perceptron is just a
mathematical function mapping some set of input values to output values. The
function is formed by composing many simpler functions. We can think of each
application of a different mathematical function as providing a new representation
of the input.
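A minimal sketch of such a composition, assuming NumPy, rectified linear hidden units and arbitrary illustrative layer sizes (none of which are prescribed by the text), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(W, b):
    # One simple function: an affine map followed by a rectified linear nonlinearity.
    return lambda h: np.maximum(0.0, h @ W + b)

f1 = layer(rng.standard_normal((4, 8)), np.zeros(8))   # first new representation of the input
f2 = layer(rng.standard_normal((8, 8)), np.zeros(8))   # second representation
W3 = rng.standard_normal((8, 1))
f3 = lambda h: h @ W3                                   # output layer (linear)

x = rng.standard_normal((1, 4))                         # one illustrative input
y = f3(f2(f1(x)))                                       # the MLP: a composition of simpler functions
```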
The idea of learning the right representation for the data provides one perspec-
tive on deep learning. Another perspective on deep learning is that depth allows the
computer to learn a multi-step computer program. Each layer of the representation
can be thought of as the state of the computer’s memory after executing another
set of instructions in parallel. Networks with greater depth can execute more
instructions in sequence. Sequential instructions offer great power because later
[Figure 1.2 diagram labels: visible layer (input pixels); 1st hidden layer (edges); 2nd hidden layer (corners and contours); 3rd hidden layer (object parts); output (object identity: car, person, animal).]
Figure 1.2: Illustration of a deep learning model. It is difficult for a computer to understand
the meaning of raw sensory input data, such as this image represented as a collection
of pixel values. The function mapping from a set of pixels to an object identity is very
complicated. Learning or evaluating this mapping seems insurmountable if tackled directly.
Deep learning resolves this difficulty by breaking the desired complicated mapping into a
series of nested simple mappings, each described by a different layer of the model. The
input is presented at the visible layer, so named because it contains the variables that
we are able to observe. Then a series of hidden layers extracts increasingly abstract
features from the image. These layers are called “hidden” because their values are not given
in the data; instead the model must determine which concepts are useful for explaining
the relationships in the observed data. The images here are visualizations of the kind
of feature represented by each hidden unit. Given the pixels, the first layer can easily
identify edges, by comparing the brightness of neighboring pixels. Given the first hidden
layer’s description of the edges, the second hidden layer can easily search for corners and
extended contours, which are recognizable as collections of edges. Given the second hidden
layer’s description of the image in terms of corners and contours, the third hidden layer
can detect entire parts of specific objects, by finding specific collections of contours and
corners. Finally, this description of the image in terms of the object parts it contains can
be used to recognize the objects present in the image. Images reproduced with permission
from Zeiler and Fergus (2014).
x1
x1
σ
w1
w1
×
x2
x2
w2
w2
×
+
Element
Set
+
×
σ
x
x
w
w
Element
Set
Logistic
Regression
Logistic
Regression
Figure 1.3: Illustration of computational graphs mapping an input to an output where
each node performs an operation. Depth is the length of the longest path from input to
output but depends on the definition of what constitutes a possible computational step.
The computation depicted in these graphs is the output of a logistic regression model,
σ(w⊤x), where σ is the logistic sigmoid function. If we use addition, multiplication and
logistic sigmoids as the elements of our computer language, then this model has depth
three. If we view logistic regression as an element itself, then this model has depth one.
instructions can refer back to the results of earlier instructions. According to this
view of deep learning, not all of the information in a layer’s activations necessarily
encodes factors of variation that explain the input. The representation also stores
state information that helps to execute a program that can make sense of the input.
This state information could be analogous to a counter or pointer in a traditional
computer program. It has nothing to do with the content of the input specifically,
but it helps the model to organize its processing.
There are two main ways of measuring the depth of a model. The first view is
based on the number of sequential instructions that must be executed to evaluate
the architecture. We can think of this as the length of the longest path through
a flow chart that describes how to compute each of the model’s outputs given
its inputs. Just as two equivalent computer programs will have different lengths
depending on which language the program is written in, the same function may
be drawn as a flowchart with different depths depending on which functions we
allow to be used as individual steps in the flowchart. Figure 1.3 illustrates how this
choice of language can give two different measurements for the same architecture.
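A minimal numerical sketch of the two measurements in figure 1.3, assuming NumPy and arbitrary illustrative weights, makes the point concrete: the same logistic regression output can be counted as three elementary steps or as one composite step.

```python
import numpy as np

w = np.array([0.5, -1.0])     # illustrative weights
x = np.array([1.0, 2.0])      # illustrative input

# Counted in elementary operations, the model has depth three: multiply, add, sigmoid.
products = w * x                       # step 1
s = products.sum()                     # step 2
out = 1.0 / (1.0 + np.exp(-s))         # step 3

# Counted with logistic regression itself as a single element, the model has depth one.
def logistic_regression(x, w):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

assert np.isclose(out, logistic_regression(x, w))
```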
Another approach, used by deep probabilistic models, regards the depth of a
model as being not the depth of the computational graph but the depth of the
graph describing how concepts are related to each other. In this case, the depth
of the flowchart of the computations needed to compute the representation of
each concept may be much deeper than the graph of the concepts themselves.
This is because the system’s understanding of the simpler concepts can be refined
given information about the more complex concepts. For example, an AI system
observing an image of a face with one eye in shadow may initially only see one eye.
After detecting that a face is present, it can then infer that a second eye is probably
present as well. In this case, the graph of concepts only includes two layers—a
layer for eyes and a layer for faces—but the graph of computations includes 2n
layers if we refine our estimate of each concept given the other n times.
Because it is not always clear which of these two views—the depth of the
computational graph, or the depth of the probabilistic modeling graph—is most
relevant, and because different people choose different sets of smallest elements
from which to construct their graphs, there is no single correct value for the
depth of an architecture, just as there is no single correct value for the length of
a computer program. Nor is there a consensus about how much depth a model
requires to qualify as “deep.” However, deep learning can safely be regarded as the
study of models that either involve a greater amount of composition of learned
functions or learned concepts than traditional machine learning does.
To summarize, deep learning, the subject of this book, is an approach to AI.
Specifically, it is a type of machine learning, a technique that allows computer
systems to improve with experience and data. According to the authors of this
book, machine learning is the only viable approach to building AI systems that
can operate in complicated, real-world environments. Deep learning is a particular
kind of machine learning that achieves great power and flexibility by learning to
represent the world as a nested hierarchy of concepts, with each concept defined in
relation to simpler concepts, and more abstract representations computed in terms
of less abstract ones. Figure 1.4 illustrates the relationship between these different
AI disciplines. Figure 1.5 gives a high-level schematic of how each works.
1.1 Who Should Read This Book?
This book can be useful for a variety of readers, but we wrote it with two main
target audiences in mind. One of these target audiences is university students
(undergraduate or graduate) learning about machine learning, including those who
are beginning a career in deep learning and artificial intelligence research. The
other target audience is software engineers who do not have a machine learning
or statistics background, but want to rapidly acquire one and begin using deep
learning in their product or platform. Deep learning has already proven useful in
[Figure 1.4 diagram labels: nested sets AI ⊃ machine learning ⊃ representation learning ⊃ deep learning, with one example each: knowledge bases (AI), logistic regression (machine learning), shallow autoencoders (representation learning), MLPs (deep learning).]
Figure 1.4: A Venn diagram showing how deep learning is a kind of representation learning,
which is in turn a kind of machine learning, which is used for many but not all approaches
to AI. Each section of the Venn diagram includes an example of an AI technology.
[Figure 1.5 diagram labels: rule-based systems (input → hand-designed program → output); classic machine learning (input → hand-designed features → mapping from features → output); representation learning (input → features → mapping from features → output); deep learning (input → simple features → additional layers of more abstract features → mapping from features → output).]
Figure 1.5: Flowcharts showing how the different parts of an AI system relate to each
other within different AI disciplines. Shaded boxes indicate components that are able to
learn from data.
many software disciplines including computer vision, speech and audio processing,
natural language processing, robotics, bioinformatics and chemistry, video games,
search engines, online advertising and finance.
This book has been organized into three parts in order to best accommodate a
variety of readers. Part I introduces basic mathematical tools and machine learning
concepts. Part II describes the most established deep learning algorithms that are
essentially solved technologies. Part III describes more speculative ideas that are
widely believed to be important for future research in deep learning.
Readers should feel free to skip parts that are not relevant given their interests
or background. Readers familiar with linear algebra, probability, and fundamental
machine learning concepts can skip part I, for example, while readers who just want
to implement a working system need not read beyond part II. To help choose which
chapters to read, figure 1.6 provides a flowchart showing the high-level organization
of the book.
We do assume that all readers come from a computer science background. We
assume familiarity with programming, a basic understanding of computational
performance issues, complexity theory, introductory level calculus and some of the
terminology of graph theory.
1.2 Historical Trends in Deep Learning
It is easiest to understand deep learning with some historical context. Rather than
providing a detailed history of deep learning, we identify a few key trends:
• Deep learning has had a long and rich history, but has gone by many names
reflecting different philosophical viewpoints, and has waxed and waned in
popularity.
• Deep learning has become more useful as the amount of available training
data has increased.
• Deep learning models have grown in size over time as computer infrastructure
(both hardware and software) for deep learning has improved.
• Deep learning has solved increasingly complicated applications with increasing
accuracy over time.
[Figure 1.6 diagram labels: 1. Introduction; Part I: Applied Math and Machine Learning Basics (2. Linear Algebra, 3. Probability and Information Theory, 4. Numerical Computation, 5. Machine Learning Basics); Part II: Deep Networks: Modern Practices (6. Deep Feedforward Networks, 7. Regularization, 8. Optimization, 9. CNNs, 10. RNNs, 11. Practical Methodology, 12. Applications); Part III: Deep Learning Research (13. Linear Factor Models, 14. Autoencoders, 15. Representation Learning, 16. Structured Probabilistic Models, 17. Monte Carlo Methods, 18. Partition Function, 19. Inference, 20. Deep Generative Models).]
Figure 1.6: The high-level organization of the book. An arrow from one chapter to another
indicates that the former chapter is prerequisite material for understanding the latter.
1.2.1 The Many Names and Changing Fortunes of Neural Networks
We expect that many readers of this book have heard of deep learning as an
exciting new technology, and are surprised to see a mention of “history” in a book
about an emerging field. In fact, deep learning dates back to the 1940s. Deep
learning only appears to be new, because it was relatively unpopular for several
years preceding its current popularity, and because it has gone through many
different names, and has only recently become called “deep learning.” The field
has been rebranded many times, reflecting the influence of different researchers
and different perspectives.
A comprehensive history of deep learning is beyond the scope of this textbook.
However, some basic context is useful for understanding deep learning. Broadly
speaking, there have been three waves of development of deep learning: deep
learning known as cybernetics in the 1940s–1960s, deep learning known as
connectionism in the 1980s–1990s, and the current resurgence under the name
deep learning beginning in 2006. This is quantitatively illustrated in figure 1.7.
Some of the earliest learning algorithms we recognize today were intended
to be computational models of biological learning, i.e. models of how learning
happens or could happen in the brain. As a result, one of the names that deep
learning has gone by is artificial neural networks (ANNs). The corresponding
perspective on deep learning models is that they are engineered systems inspired
by the biological brain (whether the human brain or the brain of another animal).
While the kinds of neural networks used for machine learning have sometimes
been used to understand brain function (Hinton and Shallice, 1991), they are
generally not designed to be realistic models of biological function. The neural
perspective on deep learning is motivated by two main ideas. One idea is that
the brain provides a proof by example that intelligent behavior is possible, and a
conceptually straightforward path to building intelligence is to reverse engineer the
computational principles behind the brain and duplicate its functionality. Another
perspective is that it would be deeply interesting to understand the brain and the
principles that underlie human intelligence, so machine learning models that shed
light on these basic scientific questions are useful apart from their ability to solve
engineering applications.
The modern term “deep learning” goes beyond the neuroscientific perspective
on the current breed of machine learning models. It appeals to a more general
principle of learning multiple levels of composition, which can be applied in machine
learning frameworks that are not necessarily neurally inspired.
[Figure 1.7 plot: frequency of word or phrase (vertical axis) versus year, 1940–2000 (horizontal axis), for “cybernetics” and “connectionism + neural networks”.]
Figure 1.7: The figure shows two of the three historical waves of artificial neural nets
research, as measured by the frequency of the phrases “cybernetics” and “connectionism” or
“neural networks” according to Google Books (the third wave is too recent to appear). The
first wave started with cybernetics in the 1940s–1960s, with the development of theories
of biological learning (McCulloch and Pitts, 1943; Hebb, 1949) and implementations of
the first models such as the perceptron (Rosenblatt, 1958) allowing the training of a single
neuron. The second wave started with the connectionist approach of the 1980–1995 period,
with back-propagation (Rumelhart et al., 1986a) to train a neural network with one or two
hidden layers. The current and third wave, deep learning, started around 2006 (Hinton
et al., 2006; Bengio et al., 2007; Ranzato et al., 2007a), and is just now appearing in book
form as of 2016. The other two waves similarly appeared in book form much later than
the corresponding scientific activity occurred.
The earliest predecessors of modern deep learning were simple linear models
motivated from a neuroscientific perspective. These models were designed to
take a set of n input values x1, . . . , xn and associate them with an output y.
These models would learn a set of weights w1, . . . , wn and compute their output
f(x, w) = x1w1 + · · · + xnwn. This first wave of neural networks research was
known as cybernetics, as illustrated in figure 1.7.
The McCulloch-Pitts Neuron (McCulloch and Pitts, 1943) was an early model
of brain function. This linear model could recognize two different categories of
inputs by testing whether f(x, w) is positive or negative. Of course, for the model
to correspond to the desired definition of the categories, the weights needed to be
set correctly. These weights could be set by the human operator. In the 1950s,
the perceptron (Rosenblatt, 1958, 1962) became the first model that could learn
the weights defining the categories given examples of inputs from each category.
The adaptive linear element (ADALINE), which dates from about the same
time, simply returned the value of f(x) itself to predict a real number (Widrow
and Hoff, 1960), and could also learn to predict these numbers from data.
These simple learning algorithms greatly affected the modern landscape of ma-
chine learning. The training algorithm used to adapt the weights of the ADALINE
was a special case of an algorithm called stochastic gradient descent. Slightly
modified versions of the stochastic gradient descent algorithm remain the dominant
training algorithms for deep learning models today.
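A minimal sketch of this kind of training, assuming NumPy, a squared-error objective and an invented linear task (an illustration in the spirit of the ADALINE rule, not the historical implementation), is:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])               # weights of a hypothetical task to recover
X = rng.standard_normal((200, 2))
y = X @ true_w                               # targets for the illustrative linear task

w = np.zeros(2)
learning_rate = 0.1
for x_i, y_i in zip(X, y):                   # visit one example at a time: "stochastic"
    error = y_i - x_i @ w
    w += learning_rate * error * x_i         # gradient step on 0.5 * error ** 2

print(w)                                     # approaches [2.0, -3.0]
```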
Models based on the f(x, w) used by the perceptron and ADALINE are called
linear models. These models remain some of the most widely used machine
learning models, though in many cases they are trained in different ways than the
original models were trained.
Linear models have many limitations. Most famously, they cannot learn the
XOR function, where f ([0,1], w) = 1 and f([1,0], w) = 1 but f([1, 1], w) = 0
and f([0, 0], w) = 0. Critics who observed these flaws in linear models caused
a backlash against biologically inspired learning in general (Minsky and Papert,
1969). This was the first major dip in the popularity of neural networks.
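A minimal sketch of this limitation, assuming NumPy and a least-squares fit with a bias term (one simple way to pick the best linear weights), shows that the best linear fit outputs 0.5 on every XOR input:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)              # XOR targets

X_bias = np.hstack([X, np.ones((4, 1))])             # append a constant bias feature
w, *_ = np.linalg.lstsq(X_bias, y, rcond=None)       # best linear fit in the squared-error sense

print(X_bias @ w)                                    # [0.5, 0.5, 0.5, 0.5]: no linear model fits XOR
```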
Today, neuroscience is regarded as an important source of inspiration for deep
learning researchers, but it is no longer the predominant guide for the field.
The main reason for the diminished role of neuroscience in deep learning
research today is that we simply do not have enough information about the brain
to use it as a guide. To obtain a deep understanding of the actual algorithms used
by the brain, we would need to be able to monitor the activity of (at the very
least) thousands of interconnected neurons simultaneously. Because we are not
able to do this, we are far from understanding even some of the most simple and
well-studied parts of the brain (Olshausen and Field, 2005).
Neuroscience has given us a reason to hope that a single deep learning algorithm
can solve many different tasks. Neuroscientists have found that ferrets can learn to
“see” with the auditory processing region of their brain if their brains are rewired
to send visual signals to that area (Von Melchner et al., 2000). This suggests that
much of the mammalian brain might use a single algorithm to solve most of the
different tasks that the brain solves. Before this hypothesis, machine learning
research was more fragmented, with different communities of researchers studying
natural language processing, vision, motion planning and speech recognition. Today,
these application communities are still separate, but it is common for deep learning
research groups to study many or even all of these application areas simultaneously.
We are able to draw some rough guidelines from neuroscience. The basic idea of
having many computational units that become intelligent only via their interactions
with each other is inspired by the brain. The Neocognitron (Fukushima, 1980)
introduced a powerful model architecture for processing images that was inspired
by the structure of the mammalian visual system and later became the basis
for the modern convolutional network (LeCun et al., 1998b), as we will see in
section 9.10. Most neural networks today are based on a model neuron called
the rectified linear unit. The original Cognitron (Fukushima, 1975) introduced
a more complicated version that was highly inspired by our knowledge of brain
function. The simplified modern version was developed incorporating ideas from
many viewpoints, with Nair and Hinton (2010) and Glorot et al. (2011a) citing
neuroscience as an influence, and Jarrett et al. (2009) citing more engineering-
oriented influences. While neuroscience is an important source of inspiration, it
need not be taken as a rigid guide. We know that actual neurons compute very
different functions than modern rectified linear units, but greater neural realism
has not yet led to an improvement in machine learning performance. Also, while
neuroscience has successfully inspired several neural network architectures, we
do not yet know enough about biological learning for neuroscience to offer much
guidance for the learning algorithms we use to train these architectures.
Media accounts often emphasize the similarity of deep learning to the brain.
While it is true that deep learning researchers are more likely to cite the brain as an
influence than researchers working in other machine learning fields such as kernel
machines or Bayesian statistics, one should not view deep learning as an attempt
to simulate the brain. Modern deep learning draws inspiration from many fields,
especially applied math fundamentals like linear algebra, probability, information
theory, and numerical optimization. While some deep learning researchers cite
neuroscience as an important source of inspiration, others are not concerned with
neuroscience at all.
It is worth noting that the effort to understand how the brain works on
an algorithmic level is alive and well. This endeavor is primarily known as
“computational neuroscience” and is a separate field of study from deep learning.
It is common for researchers to move back and forth between both fields. The
field of deep learning is primarily concerned with how to build computer systems
that are able to successfully solve tasks requiring intelligence, while the field of
computational neuroscience is primarily concerned with building more accurate
models of how the brain actually works.
In the 1980s, the second wave of neural network research emerged in great
part via a movement called connectionism or parallel distributed process-
ing (Rumelhart et al., 1986c; McClelland et al., 1995). Connectionism arose in
the context of cognitive science. Cognitive science is an interdisciplinary approach
to understanding the mind, combining multiple different levels of analysis. During
the early 1980s, most cognitive scientists studied models of symbolic reasoning.
Despite their popularity, symbolic models were difficult to explain in terms of
how the brain could actually implement them using neurons. The connectionists
began to study models of cognition that could actually be grounded in neural
implementations (Touretzky and Minton, 1985), reviving many ideas dating back
to the work of psychologist Donald Hebb in the 1940s (Hebb, 1949).
The central idea in connectionism is that a large number of simple computational
units can achieve intelligent behavior when networked together. This insight
applies equally to neurons in biological nervous systems and to hidden units in
computational models.
Several key concepts arose during the connectionism movement of the 1980s
that remain central to today’s deep learning.
One of these concepts is that of distributed representation (Hinton et al.,
1986). This is the idea that each input to a system should be represented by
many features, and each feature should be involved in the representation of many
possible inputs. For example, suppose we have a vision system that can recognize
cars, trucks, and birds and these objects can each be red, green, or blue. One way
of representing these inputs would be to have a separate neuron or hidden unit
that activates for each of the nine possible combinations: red truck, red car, red
bird, green truck, and so on. This requires nine different neurons, and each neuron
must independently learn the concept of color and object identity. One way to
improve on this situation is to use a distributed representation, with three neurons
describing the color and three neurons describing the object identity. This requires
only six neurons total instead of nine, and the neuron describing redness is able to
learn about redness from images of cars, trucks and birds, not only from images
of one specific category of objects. The concept of distributed representation is
central to this book, and will be described in greater detail in chapter 15.
Another major accomplishment of the connectionist movement was the suc-
cessful use of back-propagation to train deep neural networks with internal repre-
sentations and the popularization of the back-propagation algorithm (Rumelhart et al., 1986a; LeCun, 1987). This algorithm has waxed and waned in popularity
but as of this writing is currently the dominant approach to training deep models.
During the 1990s, researchers made important advances in modeling sequences
with neural networks. Hochreiter (1991) and Bengio et al. (1994) identified some of
the fundamental mathematical difficulties in modeling long sequences, described in
section 10.7. Hochreiter and Schmidhuber (1997) introduced the long short-term
memory or LSTM network to resolve some of these difficulties. Today, the LSTM
is widely used for many sequence modeling tasks, including many natural language
processing tasks at Google.
The second wave of neural networks research lasted until the mid-1990s. Ven-
tures based on neural networks and other AI technologies began to make unrealisti-
cally ambitious claims while seeking investments. When AI research did not fulfill
these unreasonable expectations, investors were disappointed. Simultaneously,
other fields of machine learning made advances. Kernel machines (Boser et al., 1992; Cortes and Vapnik, 1995; Schölkopf et al., 1999) and graphical models (Jordan, 1998) both achieved good results on many important tasks. These two factors
led to a decline in the popularity of neural networks that lasted until 2007.
During this time, neural networks continued to obtain impressive performance
on some tasks (LeCun et al., 1998b; Bengio et al., 2001). The Canadian Institute
for Advanced Research (CIFAR) helped to keep neural networks research alive
via its Neural Computation and Adaptive Perception (NCAP) research initiative.
This program united machine learning research groups led by Geoffrey Hinton
at University of Toronto, Yoshua Bengio at University of Montreal, and Yann
LeCun at New York University. The CIFAR NCAP research initiative had a
multi-disciplinary nature that also included neuroscientists and experts in human
and computer vision.
At this point in time, deep networks were generally believed to be very difficult
to train. We now know that algorithms that have existed since the 1980s work
quite well, but this was not apparent circa 2006. The issue is perhaps simply that
these algorithms were too computationally costly to allow much experimentation
with the hardware available at the time.
The third wave of neural networks research began with a breakthrough in
2006. Geoffrey Hinton showed that a kind of neural network called a deep belief
network could be efficiently trained using a strategy called greedy layer-wise pre-
training (Hinton et al., 2006), which will be described in more detail in section 15.1.
The other CIFAR-affiliated research groups quickly showed that the same strategy
could be used to train many other kinds of deep networks (Bengio et al., 2007; Ranzato et al., 2007a) and systematically helped to improve generalization on
test examples. This wave of neural networks research popularized the use of the
term “deep learning” to emphasize that researchers were now able to train deeper
neural networks than had been possible before, and to focus attention on the
theoretical importance of depth (Bengio and LeCun, 2007; Delalleau and Bengio, 2011; Pascanu et al., 2014a; Montufar et al., 2014). At this time, deep neural
networks outperformed competing AI systems based on other machine learning
technologies as well as hand-designed functionality. This third wave of popularity
of neural networks continues to the time of this writing, though the focus of deep
learning research has changed dramatically within the time of this wave. The
third wave began with a focus on new unsupervised learning techniques and the
ability of deep models to generalize well from small datasets, but today there is
more interest in much older supervised learning algorithms and the ability of deep
models to leverage large labeled datasets.
1.2.2 Increasing Dataset Sizes
One may wonder why deep learning has only recently become recognized as a
crucial technology though the first experiments with artificial neural networks were
conducted in the 1950s. Deep learning has been successfully used in commercial
applications since the 1990s, but was often regarded as being more of an art than
a technology and something that only an expert could use, until recently. It is true
that some skill is required to get good performance from a deep learning algorithm.
Fortunately, the amount of skill required reduces as the amount of training data
increases. The learning algorithms reaching human performance on complex tasks
today are nearly identical to the learning algorithms that struggled to solve toy
problems in the 1980s, though the models we train with these algorithms have
undergone changes that simplify the training of very deep architectures. The most
important new development is that today we can provide these algorithms with
the resources they need to succeed. Figure 1.8 shows how the size of benchmark
datasets has increased remarkably over time. This trend is driven by the increasing
digitization of society. As more and more of our activities take place on computers,
more and more of what we do is recorded. As our computers are increasingly
networked together, it becomes easier to centralize these records and curate them
into a dataset appropriate for machine learning applications. The age of “Big
Data” has made machine learning much easier because the key burden of statistical
estimation—generalizing well to new data after observing only a small amount
of data—has been considerably lightened. As of 2016, a rough rule of thumb
is that a supervised deep learning algorithm will generally achieve acceptable
performance with around 5,000 labeled examples per category, and will match or
exceed human performance when trained with a dataset containing at least 10
million labeled examples. Working successfully with datasets smaller than this is
an important research area, focusing in particular on how we can take advantage
of large quantities of unlabeled examples, with unsupervised or semi-supervised
learning.
1.2.3 Increasing Model Sizes
Another key reason that neural networks are wildly successful today after enjoying
comparatively little success since the 1980s is that we have the computational
resources to run much larger models today. One of the main insights of connection-
ism is that animals become intelligent when many of their neurons work together.
An individual neuron or small collection of neurons is not particularly useful.
Biological neurons are not especially densely connected. As seen in figure 1.10,
our machine learning models have had a number of connections per neuron that
was within an order of magnitude of even mammalian brains for decades.
In terms of the total number of neurons, neural networks have been astonishingly
small until quite recently, as shown in figure 1.11. Since the introduction of hidden
units, artificial neural networks have doubled in size roughly every 2.4 years. This
growth is driven by faster computers with larger memory and by the availability
of larger datasets. Larger networks are able to achieve higher accuracy on more
complex tasks. This trend looks set to continue for decades. Unless new technologies
allow faster scaling, artificial neural networks will not have the same number of
neurons as the human brain until at least the 2050s. Biological neurons may
represent more complicated functions than current artificial neurons, so biological
neural networks may be even larger than this plot portrays.
In retrospect, it is not particularly surprising that neural networks with fewer
neurons than a leech were unable to solve sophisticated artificial intelligence prob-
lems. Even today’s networks, which we consider quite large from a computational
systems point of view, are smaller than the nervous system of even relatively
primitive vertebrate animals like frogs.
The increase in model size over time, due to the availability of faster CPUs,
[Figure 1.8 plot: dataset size in number of examples (logarithmic scale, 10^0 to 10^9) by year, 1900–2015, marking Criminals, Iris, T vs. G vs. F, Rotated T vs. C, MNIST, CIFAR-10, ImageNet10k, Public SVHN, ImageNet, ILSVRC 2014, Sports-1M, the Canadian Hansard and WMT.]
Figure 1.8: Dataset sizes have increased greatly over time. In the early 1900s, statisticians studied datasets using hundreds or thousands of manually compiled measurements (Garson, 1900; Gosset, 1908; Anderson, 1935; Fisher, 1936). In the 1950s through 1980s, the pioneers of biologically inspired machine learning often worked with small, synthetic datasets, such as low-resolution bitmaps of letters, that were designed to incur low computational cost and demonstrate that neural networks were able to learn specific kinds of functions (Widrow and Hoff, 1960; Rumelhart et al., 1986b). In the 1980s and 1990s, machine learning became more statistical in nature and began to leverage larger datasets containing tens of thousands of examples, such as the MNIST dataset (shown in figure 1.9) of scans of handwritten numbers (LeCun et al., 1998b). In the first decade of the 2000s, more sophisticated datasets of this same size, such as the CIFAR-10 dataset (Krizhevsky and Hinton, 2009), continued to be produced. Toward the end of that decade and throughout the first half of the 2010s, significantly larger datasets, containing hundreds of thousands to tens of millions of examples, completely changed what was possible with deep learning. These datasets included the public Street View House Numbers dataset (Netzer et al., 2011), various versions of the ImageNet dataset (Deng et al., 2009, 2010a; Russakovsky et al., 2014a), and the Sports-1M dataset (Karpathy et al., 2014). At the top of the graph, we see that datasets of translated sentences, such as IBM's dataset constructed from the Canadian Hansard (Brown et al., 1990) and the WMT 2014 English to French dataset (Schwenk, 2014), are typically far ahead of other dataset sizes.
Figure 1.9: Example inputs from the MNIST dataset. The “NIST” stands for National
Institute of Standards and Technology, the agency that originally collected this data.
The “M” stands for “modified,” since the data has been preprocessed for easier use with
machine learning algorithms. The MNIST dataset consists of scans of handwritten digits
and associated labels describing which digit 0–9 is contained in each image. This simple
classification problem is one of the simplest and most widely used tests in deep learning
research. It remains popular despite being quite easy for modern techniques to solve.
Geoffrey Hinton has described it as “the drosophila of machine learning,” meaning that
it allows machine learning researchers to study their algorithms in controlled laboratory
conditions, much as biologists often study fruit flies.
the advent of general purpose GPUs (described in section 12.1.2), faster network
connectivity and better software infrastructure for distributed computing, is one of
the most important trends in the history of deep learning. This trend is generally
expected to continue well into the future.
1.2.4 Increasing Accuracy, Complexity and Real-World Impact
Since the 1980s, deep learning has consistently improved in its ability to provide
accurate recognition or prediction. Moreover, deep learning has consistently been
applied with success to broader and broader sets of applications.
The earliest deep models were used to recognize individual objects in tightly
cropped, extremely small images (Rumelhart et al., 1986a). Since then there has
been a gradual increase in the size of images neural networks could process. Modern
object recognition networks process rich high-resolution photographs and do not
have a requirement that the photo be cropped near the object to be recognized
(Krizhevsky et al., 2012). Similarly, the earliest networks could only recognize
two kinds of objects (or in some cases, the absence or presence of a single kind of
object), while these modern networks typically recognize at least 1,000 different
categories of objects. The largest contest in object recognition is the ImageNet
Large Scale Visual Recognition Challenge (ILSVRC) held each year. A dramatic
moment in the meteoric rise of deep learning came when a convolutional network
won this challenge for the first time and by a wide margin, bringing down the
state-of-the-art top-5 error rate from 26.1% to 15.3% (Krizhevsky et al., 2012),
meaning that the convolutional network produces a ranked list of possible categories
for each image and the correct category appeared in the first five entries of this
list for all but 15.3% of the test examples. Since then, these competitions are
consistently won by deep convolutional nets, and as of this writing, advances in
deep learning have brought the latest top-5 error rate in this contest down to 3.6%,
as shown in figure 1.12.
Deep learning has also had a dramatic impact on speech recognition. After
improving throughout the 1990s, the error rates for speech recognition stagnated
starting in about 2000. The introduction of deep learning (Dahl et al., 2010; Deng et al., 2010b; Seide et al., 2011; Hinton et al., 2012a) to speech recognition resulted
in a sudden drop of error rates, with some error rates cut in half. We will explore
this history in more detail in section 12.3.
Deep networks have also had spectacular successes for pedestrian detection and
image segmentation (Sermanet et al., 2013; Farabet et al., 2013; Couprie et al., 2013) and yielded superhuman performance in traffic sign classification (Ciresan et al., 2012).
[Figure 1.10 plot: connections per neuron (logarithmic scale, 10^1 to 10^4) by year, 1950–2015, for the ten models listed below, with reference levels for the fruit fly, mouse, cat and human.]
Figure 1.10: Initially, the number of connections between neurons in artificial neural
networks was limited by hardware capabilities. Today, the number of connections between
neurons is mostly a design consideration. Some artificial neural networks have nearly as
many connections per neuron as a cat, and it is quite common for other neural networks
to have as many connections per neuron as smaller mammals like mice. Even the human
brain does not have an exorbitant amount of connections per neuron. Biological neural
network sizes from Wikipedia (2015).
1. Adaptive linear element (Widrow and Hoff, 1960)
2. Neocognitron (Fukushima, 1980)
3. GPU-accelerated convolutional network (Chellapilla et al., 2006)
4. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
5. Unsupervised convolutional network (Jarrett et al., 2009)
6. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
7. Distributed autoencoder (Le et al., 2012)
8. Multi-GPU convolutional network (Krizhevsky et al., 2012)
9. COTS HPC unsupervised convolutional network (Coates et al., 2013)
10. GoogLeNet (Szegedy et al., 2014a)
At the same time that the scale and accuracy of deep networks has increased,
so has the complexity of the tasks that they can solve. Goodfellow et al. (2014d)
showed that neural networks could learn to output an entire sequence of characters
transcribed from an image, rather than just identifying a single object. Previously,
it was widely believed that this kind of learning required labeling of the individual
elements of the sequence (Gülçehre and Bengio, 2013). Recurrent neural networks,
such as the LSTM sequence model mentioned above, are now used to model
relationships between sequences and other sequences rather than just fixed inputs.
This sequence-to-sequence learning seems to be on the cusp of revolutionizing
another application: machine translation (Sutskever et al., 2014; Bahdanau et al., 2015).
This trend of increasing complexity has been pushed to its logical conclusion
with the introduction of neural Turing machines (Graves et al., 2014a) that learn
to read from memory cells and write arbitrary content to memory cells. Such
neural networks can learn simple programs from examples of desired behavior. For
example, they can learn to sort lists of numbers given examples of scrambled and
sorted sequences. This self-programming technology is in its infancy, but in the
future could in principle be applied to nearly any task.
Another crowning achievement of deep learning is its extension to the domain of
reinforcement learning. In the context of reinforcement learning, an autonomous
agent must learn to perform a task by trial and error, without any guidance from
the human operator. DeepMind demonstrated that a reinforcement learning system
based on deep learning is capable of learning to play Atari video games, reaching
human-level performance on many tasks (Mnih et al., 2015). Deep learning has
also significantly improved the performance of reinforcement learning for robotics
(Finn et al., 2015).
Many of these applications of deep learning are highly profitable. Deep learning
is now used by many top technology companies including Google, Microsoft,
Facebook, IBM, Baidu, Apple, Adobe, Netflix, NVIDIA and NEC.
Advances in deep learning have also depended heavily on advances in software
infrastructure. Software libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012), PyLearn2 (Goodfellow et al., 2013c), Torch (Collobert et al., 2011b), DistBelief (Dean et al., 2012), Caffe (Jia, 2013), MXNet (Chen et al., 2015), and TensorFlow (Abadi et al., 2015) have all supported important research projects or
commercial products.
Deep learning has also made contributions back to other sciences. Modern
convolutional networks for object recognition provide a model of visual processing
that neuroscientists can study (DiCarlo, 2013). Deep learning also provides useful
tools for processing massive amounts of data and making useful predictions in
scientific fields. It has been successfully used to predict how molecules will interact
in order to help pharmaceutical companies design new drugs (Dahl et al., 2014),
to search for subatomic particles (Baldi et al., 2014), and to automatically parse microscope images used to construct a 3-D map of the human brain (Knowles-Barley et al., 2014). We expect deep learning to appear in more and more scientific
fields in the future.
In summary, deep learning is an approach to machine learning that has drawn
heavily on our knowledge of the human brain, statistics and applied math as it
developed over the past several decades. In recent years, it has seen tremendous
growth in its popularity and usefulness, due in large part to more powerful com-
puters, larger datasets and techniques to train deeper networks. The years ahead
are full of challenges and opportunities to improve deep learning even further and
bring it to new frontiers.
[Figure 1.11 plot: number of neurons (logarithmic scale, 10^-2 to 10^11) by year, 1950–2056 (projected), for the twenty models listed below, with reference levels for the sponge, roundworm, leech, ant, bee, frog, octopus and human.]
Figure 1.11: Since the introduction of hidden units, artificial neural networks have doubled
in size roughly every 2.4 years. Biological neural network sizes from Wikipedia (2015).
1. Perceptron (Rosenblatt, 1958, 1962)
2. Adaptive linear element (Widrow and Hoff, 1960)
3. Neocognitron (Fukushima, 1980)
4. Early back-propagation network (Rumelhart et al., 1986b)
5. Recurrent neural network for speech recognition (Robinson and Fallside, 1991)
6. Multilayer perceptron for speech recognition (Bengio et al., 1991)
7. Mean field sigmoid belief network (Saul et al., 1996)
8. LeNet-5 (LeCun et al., 1998b)
9. Echo state network (Jaeger and Haas, 2004)
10. Deep belief network (Hinton et al., 2006)
11. GPU-accelerated convolutional network (Chellapilla et al., 2006)
12. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
13. GPU-accelerated deep belief network (Raina et al., 2009)
14. Unsupervised convolutional network (Jarrett et al., 2009)
15. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
16. OMP-1 network (Coates and Ng, 2011)
17. Distributed autoencoder (Le et al., 2012)
18. Multi-GPU convolutional network (Krizhevsky et al., 2012)
19. COTS HPC unsupervised convolutional network (Coates et al., 2013)
20. GoogLeNet (Szegedy et al., 2014a)
[Figure 1.12 plot: ILSVRC classification error rate (0.00 to 0.30) by year, 2010–2015.]
Figure 1.12: Since deep networks reached the scale necessary to compete in the ImageNet
Large Scale Visual Recognition Challenge, they have consistently won the competition
every year, and yielded lower and lower error rates each time. Data from Russakovsky et al. (2014b) and He et al. (2015).
Part I
Applied Math and Machine
Learning Basics
This part of the book introduces the basic mathematical concepts needed to
understand deep learning. We begin with general ideas from applied math that
allow us to define functions of many variables, find the highest and lowest points
on these functions and quantify degrees of belief.
Next, we describe the fundamental goals of machine learning. We describe how
to accomplish these goals by specifying a model that represents certain beliefs,
designing a cost function that measures how well those beliefs correspond with
reality and using a training algorithm to minimize that cost function.
This elementary framework is the basis for a broad variety of machine learning
algorithms, including approaches to machine learning that are not deep. In the
subsequent parts of the book, we develop deep learning algorithms within this
framework.
Chapter 2
Linear Algebra
Linear algebra is a branch of mathematics that is widely used throughout science
and engineering. However, because linear algebra is a form of continuous rather
than discrete mathematics, many computer scientists have little experience with it.
A good understanding of linear algebra is essential for understanding and working
with many machine learning algorithms, especially deep learning algorithms. We
therefore precede our introduction to deep learning with a focused presentation of
the key linear algebra prerequisites.
If you are already familiar with linear algebra, feel free to skip this chapter. If
you have previous experience with these concepts but need a detailed reference
sheet to review key formulas, we recommend The Matrix Cookbook (Petersen and Pedersen, 2006). If you have no exposure at all to linear algebra, this chapter
will teach you enough to read this book, but we highly recommend that you also
consult another resource focused exclusively on teaching linear algebra, such as Shilov (1977). This chapter will completely omit many important linear algebra
topics that are not essential for understanding deep learning.
2.1 Scalars, Vectors, Matrices and Tensors
The study of linear algebra involves several types of mathematical objects:
• Scalars: A scalar is just a single number, in contrast to most of the other
objects studied in linear algebra, which are usually arrays of multiple numbers.
We write scalars in italics. We usually give scalars lower-case variable names.
When we introduce them, we specify what kind of number they are. For
example, we might say “Let s ∈ R be the slope of the line,” while defining a
real-valued scalar, or “Let n ∈ N be the number of units,” while defining a
natural number scalar.
• Vectors: A vector is an array of numbers. The numbers are arranged in
order. We can identify each individual number by its index in that ordering.
Typically we give vectors lower case names written in bold typeface, such
as x. The elements of the vector are identified by writing its name in italic
typeface, with a subscript. The first element of x is x1, the second element
is x2 and so on. We also need to say what kind of numbers are stored in
the vector. If each element is in R, and the vector has n elements, then the
vector lies in the set formed by taking the Cartesian product of R n times,
denoted as Rn. When we need to explicitly identify the elements of a vector,
we write them as a column enclosed in square brackets:
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}. (2.1)
We can think of vectors as identifying points in space, with each element
giving the coordinate along a different axis.
Sometimes we need to index a set of elements of a vector. In this case, we
define a set containing the indices and write the set as a subscript. For
example, to access x1, x3 and x6, we define the set S = {1, 3,6} and write
xS. We use the − sign to index the complement of a set. For example x−1 is
the vector containing all elements of x except for x1, and x−S is the vector
containing all of the elements of x except for x1, x3 and x6.
• Matrices: A matrix is a 2-D array of numbers, so each element is identified
by two indices instead of just one. We usually give matrices upper-case
variable names with bold typeface, such as A. If a real-valued matrix A has
a height of m and a width of n, then we say that A ∈ R^{m×n}. We usually identify the elements of a matrix using its name in italic but not bold font, and the indices are listed with separating commas. For example, A_{1,1} is the upper left entry of A and A_{m,n} is the bottom right entry. We can identify all of the numbers with vertical coordinate i by writing a “:” for the horizontal coordinate. For example, A_{i,:} denotes the horizontal cross section of A with vertical coordinate i. This is known as the i-th row of A. Likewise, A_{:,i} is
A = \begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \\ A_{3,1} & A_{3,2} \end{bmatrix} \quad \Rightarrow \quad A^\top = \begin{bmatrix} A_{1,1} & A_{2,1} & A_{3,1} \\ A_{1,2} & A_{2,2} & A_{3,2} \end{bmatrix}
Figure 2.1: The transpose of the matrix can be thought of as a mirror image across the
main diagonal.
the i-th column of A. When we need to explicitly identify the elements of
a matrix, we write them as an array enclosed in square brackets:

\begin{bmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{bmatrix}. (2.2)
Sometimes we may need to index matrix-valued expressions that are not just
a single letter. In this case, we use subscripts after the expression, but do
not convert anything to lower case. For example, f(A)_{i,j} gives element (i, j) of the matrix computed by applying the function f to A.
• Tensors: In some cases we will need an array with more than two axes.
In the general case, an array of numbers arranged on a regular grid with a
variable number of axes is known as a tensor. We denote a tensor named “A”
with this typeface: A. We identify the element of A at coordinates (i, j, k)
by writing Ai,j,k.
One important operation on matrices is the transpose. The transpose of a
matrix is the mirror image of the matrix across a diagonal line, called the main
diagonal, running down and to the right, starting from its upper left corner. See
figure 2.1 for a graphical depiction of this operation. We denote the transpose of a matrix A as A^\top, and it is defined such that

(A^\top)_{i,j} = A_{j,i}. (2.3)
Vectors can be thought of as matrices that contain only one column. The
transpose of a vector is therefore a matrix with only one row. Sometimes we
define a vector by writing out its elements in the text inline as a row matrix,
then using the transpose operator to turn it into a standard column vector, e.g.,
x = [x_1, x_2, x_3]^\top.
A scalar can be thought of as a matrix with only a single entry. From this, we
can see that a scalar is its own transpose: a = a^\top.
We can add matrices to each other, as long as they have the same shape, just
by adding their corresponding elements: C = A + B, where C_{i,j} = A_{i,j} + B_{i,j}.
We can also add a scalar to a matrix or multiply a matrix by a scalar, just
by performing that operation on each element of a matrix: D = a · B + c where
D_{i,j} = a · B_{i,j} + c.
In the context of deep learning, we also use some less conventional notation.
We allow the addition of a matrix and a vector, yielding another matrix: C = A + b, where C_{i,j} = A_{i,j} + b_j. In other words, the vector b is added to each row of the
matrix. This shorthand eliminates the need to define a matrix with b copied into
each row before doing the addition. This implicit copying of b to many locations
is called broadcasting.
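To make the broadcasting shorthand concrete, here is a small illustrative NumPy sketch; NumPy and the particular array values are assumptions for the example, not part of the text:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])      # a 2 x 3 matrix
    b = np.array([10., 20., 30.])     # a vector with 3 elements

    # C = A + b: b is implicitly copied ("broadcast") to each row of A.
    C = A + b
    # C is [[11., 22., 33.],
    #       [14., 25., 36.]]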
2.2 Multiplying Matrices and Vectors
One of the most important operations involving matrices is multiplication of two
matrices. The matrix product of matrices A and B is a third matrix C. In
order for this product to be defined, A must have the same number of columns as
B has rows. If A is of shape m × n and B is of shape n × p, then C is of shape m × p. We can write the matrix product just by placing two or more matrices
together, e.g.
C = AB. (2.4)
The product operation is defined by
C_{i,j} = \sum_k A_{i,k} B_{k,j}. (2.5)
Note that the standard product of two matrices is not just a matrix containing
the product of the individual elements. Such an operation exists and is called the
element-wise product or Hadamard product, and is denoted as A ⊙ B.
The dot product between two vectors x and y of the same dimensionality
is the matrix product xy. We can think of the matrix product C = AB as
computing Ci,j as the dot product between row of and column of .
i A j B
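As a rough numerical illustration of these definitions (an assumed NumPy sketch with arbitrary values, not notation from the text):

    import numpy as np

    A = np.array([[1., 2.],
                  [3., 4.],
                  [5., 6.]])          # shape 3 x 2
    B = np.array([[1., 0., 2.],
                  [0., 1., 3.]])      # shape 2 x 3

    C = A @ B                         # matrix product, shape 3 x 3
    # C[i, j] equals the sum over k of A[i, k] * B[k, j] (equation 2.5).
    assert np.isclose(C[0, 2], np.sum(A[0, :] * B[:, 2]))

    # The element-wise (Hadamard) product requires equal shapes.
    H = A * A                         # shape 3 x 2

    # Dot product of two vectors, written as the matrix product x^T y.
    x = np.array([1., 2., 3.])
    y = np.array([4., 5., 6.])
    assert np.isclose(x @ y, np.sum(x * y))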
Matrix product operations have many useful properties that make mathematical
analysis of matrices more convenient. For example, matrix multiplication is
distributive:
A(B + C) = AB + AC. (2.6)
It is also associative:
A(BC) = (AB)C. (2.7)
Matrix multiplication is not commutative (the condition AB = BA does not always hold), unlike scalar multiplication. However, the dot product between two vectors is commutative:

x^\top y = y^\top x. (2.8)
The transpose of a matrix product has a simple form:
(AB)^\top = B^\top A^\top. (2.9)
This allows us to demonstrate equation 2.8 by exploiting the fact that the value of such a product is a scalar and therefore equal to its own transpose:

x^\top y = (x^\top y)^\top = y^\top x. (2.10)
Since the focus of this textbook is not linear algebra, we do not attempt to
develop a comprehensive list of useful properties of the matrix product here, but
the reader should be aware that many more exist.
We now know enough linear algebra notation to write down a system of linear
equations:
Ax = b (2.11)
where A ∈ R^{m×n} is a known matrix, b ∈ R^m is a known vector, and x ∈ R^n is a vector of unknown variables we would like to solve for. Each element x_i of x is one of these unknown variables. Each row of A and each element of b provide another constraint. We can rewrite equation 2.11 as:
A_{1,:} x = b_1 (2.12)
A_{2,:} x = b_2 (2.13)
... (2.14)
A_{m,:} x = b_m (2.15)
or, even more explicitly, as:
A_{1,1} x_1 + A_{1,2} x_2 + · · · + A_{1,n} x_n = b_1 (2.16)
A_{2,1} x_1 + A_{2,2} x_2 + · · · + A_{2,n} x_n = b_2 (2.17)
... (2.18)
A_{m,1} x_1 + A_{m,2} x_2 + · · · + A_{m,n} x_n = b_m. (2.19)

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

Figure 2.2: Example identity matrix: This is I_3.
Matrix-vector product notation provides a more compact representation for
equations of this form.
2.3 Identity and Inverse Matrices
Linear algebra offers a powerful tool called matrix inversion that allows us to
analytically solve equation 2.11 for many values of A.
To describe matrix inversion, we first need to define the concept of an identity
matrix. An identity matrix is a matrix that does not change any vector when we
multiply that vector by that matrix. We denote the identity matrix that preserves
n-dimensional vectors as I_n. Formally, I_n ∈ R^{n×n}, and

∀x ∈ R^n, I_n x = x. (2.20)
The structure of the identity matrix is simple: all of the entries along the main
diagonal are 1, while all of the other entries are zero. See figure 2.2 for an example.
The matrix inverse of A is denoted as A^{-1}, and it is defined as the matrix such that

A^{-1} A = I_n. (2.21)
We can now solve equation 2.11 by the following steps:

Ax = b (2.22)
A^{-1} A x = A^{-1} b (2.23)
I_n x = A^{-1} b (2.24)
x = A^{-1} b. (2.25)
Of course, this process depends on it being possible to find A^{-1}. We discuss the conditions for the existence of A^{-1} in the following section.
When A^{-1} exists, several different algorithms exist for finding it in closed form. In theory, the same inverse matrix can then be used to solve the equation many times for different values of b. However, A^{-1} is primarily useful as a theoretical tool, and should not actually be used in practice for most software applications. Because A^{-1} can be represented with only limited precision on a digital computer, algorithms that make use of the value of b can usually obtain more accurate estimates of x.
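As a practical illustration of this advice (an assumed NumPy sketch with an arbitrary well-behaved system, not a prescription from the text), a direct solver should generally be preferred to explicitly forming the inverse:

    import numpy as np

    A = np.array([[3., 1.],
                  [1., 2.]])
    b = np.array([9., 8.])

    # Preferred: solve Ax = b directly without forming the inverse.
    x = np.linalg.solve(A, b)

    # Possible, but generally less accurate and more costly.
    x_via_inverse = np.linalg.inv(A) @ b

    assert np.allclose(A @ x, b)
    assert np.allclose(x, x_via_inverse)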
2.4 Linear Dependence and Span
In order for A^{-1} to exist, equation 2.11 must have exactly one solution for every
value of b. However, it is also possible for the system of equations to have no
solutions or infinitely many solutions for some values of b. It is not possible to
have more than one but less than infinitely many solutions for a particular b; if
both x and y are solutions, then

z = αx + (1 − α)y (2.26)

is also a solution for any real α.
To analyze how many solutions the equation has, we can think of the columns
of A as specifying different directions we can travel from the origin (the point
specified by the vector of all zeros), and determine how many ways there are of
reaching b. In this view, each element of x specifies how far we should travel in
each of these directions, with x_i specifying how far to move in the direction of column i:
Ax = \sum_i x_i A_{:,i}. (2.27)
In general, this kind of operation is called a linear combination. Formally, a
linear combination of some set of vectors {v^{(1)}, . . . , v^{(n)}} is given by multiplying each vector v^{(i)} by a corresponding scalar coefficient and adding the results:

\sum_i c_i v^{(i)}. (2.28)
The span of a set of vectors is the set of all points obtainable by linear combination
of the original vectors.
Determining whether Ax = b has a solution thus amounts to testing whether b
is in the span of the columns of A. This particular span is known as the column
space or the range of A.
In order for the system Ax = b to have a solution for all values of b ∈ R^m, we therefore require that the column space of A be all of R^m. If any point in R^m is excluded from the column space, that point is a potential value of b that has no solution. The requirement that the column space of A be all of R^m implies immediately that A must have at least m columns, i.e., n ≥ m. Otherwise, the
dimensionality of the column space would be less than m. For example, consider a
3 × 2 matrix. The target b is 3-D, but x is only 2-D, so modifying the value of x
at best allows us to trace out a 2-D plane within R^3. The equation has a solution if and only if b lies on that plane.
Having n ≥ m is only a necessary condition for every point to have a solution.
It is not a sufficient condition, because it is possible for some of the columns to
be redundant. Consider a 2 ×2 matrix where both of the columns are identical.
This has the same column space as a 2 × 1 matrix containing only one copy of the
replicated column. In other words, the column space is still just a line, and fails to
encompass all of R^2, even though there are two columns.
Formally, this kind of redundancy is known as linear dependence. A set of
vectors is linearly independent if no vector in the set is a linear combination
of the other vectors. If we add a vector to a set that is a linear combination of
the other vectors in the set, the new vector does not add any points to the set’s
span. This means that for the column space of the matrix to encompass all of R^m,
the matrix must contain at least one set of m linearly independent columns. This
condition is both necessary and sufficient for equation 2.11 to have a solution for every value of b. Note that the requirement is for a set to have exactly m linearly
independent columns, not at least m. No set of m-dimensional vectors can have
more than m mutually linearly independent columns, but a matrix with more than
m columns may have more than one such set.
In order for the matrix to have an inverse, we additionally need to ensure that
equation 2.11 has at most one solution for each value of b. To do so, we need to
ensure that the matrix has at most m columns. Otherwise there is more than one
way of parametrizing each solution.
Together, this means that the matrix must be square, that is, we require that
m = n and that all of the columns must be linearly independent. A square matrix
with linearly dependent columns is known as singular.
If A is not square or is square but singular, it can still be possible to solve the
equation. However, we can not use the method of matrix inversion to find the
solution.
So far we have discussed matrix inverses as being multiplied on the left. It is
also possible to define an inverse that is multiplied on the right:
A A^{-1} = I. (2.29)
For square matrices, the left inverse and right inverse are equal.
2.5 Norms
Sometimes we need to measure the size of a vector. In machine learning, we usually
measure the size of vectors using a function called a norm. Formally, the Lp norm is given by

||x||_p = \left( \sum_i |x_i|^p \right)^{1/p} (2.30)

for p ∈ R, p ≥ 1.
Norms, including the Lp norm, are functions mapping vectors to non-negative
values. On an intuitive level, the norm of a vector x measures the distance from
the origin to the point x. More rigorously, a norm is any function f that satisfies
the following properties:
• f(x) = 0 ⇒ x = 0
• f(x + y) ≤ f(x) + f(y) (the triangle inequality)
• ∀α ∈ R, f(αx) = |α| f(x)
The L2 norm, with p = 2, is known as the Euclidean norm. It is simply the Euclidean distance from the origin to the point identified by x. The L2 norm is used so frequently in machine learning that it is often denoted simply as ||x||, with the subscript 2 omitted. It is also common to measure the size of a vector using the squared L2 norm, which can be calculated simply as x^\top x.
The squared L2
norm is more convenient to work with mathematically and
computationally than the L2
norm itself. For example, the derivatives of the
squared L2
norm with respect to each element of x each depend only on the
corresponding element of x, while all of the derivatives of the L2
norm depend
on the entire vector. In many contexts, the squared L2
norm may be undesirable
because it increases very slowly near the origin. In several machine learning
applications, it is important to discriminate between elements that are exactly
zero and elements that are small but nonzero. In these cases, we turn to a function
that grows at the same rate in all locations, but retains mathematical simplicity:
the L1 norm. The L1 norm may be simplified to

||x||_1 = \sum_i |x_i|. (2.31)
The L1 norm is commonly used in machine learning when the difference between zero and nonzero elements is very important. Every time an element of x moves away from 0 by ε, the L1 norm increases by ε.
We sometimes measure the size of the vector by counting its number of nonzero
elements. Some authors refer to this function as the “L0 norm,” but this is incorrect
terminology. The number of non-zero entries in a vector is not a norm, because
scaling the vector by α does not change the number of nonzero entries. The L1
norm is often used as a substitute for the number of nonzero entries.
One other norm that commonly arises in machine learning is the L∞ norm,
also known as the max norm. This norm simplifies to the absolute value of the
element with the largest magnitude in the vector,
||x||_∞ = \max_i |x_i|. (2.32)
Sometimes we may also wish to measure the size of a matrix. In the context
of deep learning, the most common way to do this is with the otherwise obscure
Frobenius norm:
||A||_F = \sqrt{ \sum_{i,j} A_{i,j}^2 }, (2.33)
which is analogous to the L2 norm of a vector.
The dot product of two vectors can be rewritten in terms of norms. Specifically,
x^\top y = ||x||_2 ||y||_2 \cos θ (2.34)

where θ is the angle between x and y.
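For a concrete feel for these quantities, the following illustrative NumPy sketch (arbitrary values, assumed for the example) evaluates the norms of one vector and the cosine from equation 2.34:

    import numpy as np

    x = np.array([3., -4., 0.])
    y = np.array([1., 2., 2.])

    l1 = np.sum(np.abs(x))           # L1 norm: 7.0
    l2 = np.sqrt(np.sum(x ** 2))     # L2 (Euclidean) norm: 5.0
    squared_l2 = x @ x               # squared L2 norm: 25.0
    linf = np.max(np.abs(x))         # max norm: 4.0

    # Equation 2.34: the dot product encodes the angle between x and y.
    cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))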
2.6 Special Kinds of Matrices and Vectors
Some special kinds of matrices and vectors are particularly useful.
Diagonal matrices consist mostly of zeros and have non-zero entries only along
the main diagonal. Formally, a matrix D is diagonal if and only if Di,j = 0 for
all i ≠ j. We have already seen one example of a diagonal matrix: the identity
matrix, where all of the diagonal entries are 1. We write diag(v) to denote a square
diagonal matrix whose diagonal entries are given by the entries of the vector v.
Diagonal matrices are of interest in part because multiplying by a diagonal matrix
is very computationally efficient. To compute diag(v)x, we only need to scale each
element x_i by v_i. In other words, diag(v)x = v ⊙ x. Inverting a square diagonal
matrix is also efficient. The inverse exists only if every diagonal entry is nonzero,
and in that case, diag(v)−1 = diag([1/v1, . . . ,1/vn ]). In many cases, we may
derive some very general machine learning algorithm in terms of arbitrary matrices,
but obtain a less expensive (and less descriptive) algorithm by restricting some
matrices to be diagonal.
Not all diagonal matrices need be square. It is possible to construct a rectangular
diagonal matrix. Non-square diagonal matrices do not have inverses but it is still
possible to multiply by them cheaply. For a non-square diagonal matrix D, the
product Dx will involve scaling each element of x, and either concatenating some
zeros to the result if D is taller than it is wide, or discarding some of the last
elements of the vector if D is wider than it is tall.
A symmetric matrix is any matrix that is equal to its own transpose:

A = A^\top. (2.35)
Symmetric matrices often arise when the entries are generated by some function of
two arguments that does not depend on the order of the arguments. For example,
if A is a matrix of distance measurements, with Ai,j giving the distance from point
i to point j, then A_{i,j} = A_{j,i} because distance functions are symmetric.
A unit vector is a vector with unit norm:

||x||_2 = 1. (2.36)
A vector x and a vector y are orthogonal to each other if x^\top y = 0. If both
vectors have nonzero norm, this means that they are at a 90 degree angle to each
other. In Rn , at most n vectors may be mutually orthogonal with nonzero norm.
If the vectors are not only orthogonal but also have unit norm, we call them
orthonormal.
An orthogonal matrix is a square matrix whose rows are mutually orthonor-
mal and whose columns are mutually orthonormal:
A^\top A = A A^\top = I. (2.37)
This implies that
A^{-1} = A^\top, (2.38)
so orthogonal matrices are of interest because their inverse is very cheap to compute.
Pay careful attention to the definition of orthogonal matrices. Counterintuitively,
their rows are not merely orthogonal but fully orthonormal. There is no special
term for a matrix whose rows or columns are orthogonal but not orthonormal.
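A brief illustrative NumPy sketch (the matrices are arbitrary assumptions for the example) of the cheap operations these special structures allow:

    import numpy as np

    # Diagonal matrix: multiplying by diag(v) just rescales each element.
    v = np.array([1., 2., 3.])
    x = np.array([4., 5., 6.])
    assert np.allclose(np.diag(v) @ x, v * x)

    # Symmetric matrix: equal to its own transpose.
    S = np.array([[2., 1.],
                  [1., 3.]])
    assert np.allclose(S, S.T)

    # Orthogonal matrix (here a 2-D rotation): its inverse is its transpose.
    theta = 0.3
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    assert np.allclose(Q.T @ Q, np.eye(2))
    assert np.allclose(np.linalg.inv(Q), Q.T)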
2.7 Eigendecomposition
Many mathematical objects can be understood better by breaking them into
constituent parts, or finding some properties of them that are universal, not caused
by the way we choose to represent them.
For example, integers can be decomposed into prime factors. The way we represent the number 12 will change depending on whether we write it in base ten or in binary, but it will always be true that 12 = 2 × 2 × 3. From this representation we can conclude useful properties, such as that 12 is not divisible by 5, or that any integer multiple of 12 will be divisible by 3.
Much as we can discover something about the true nature of an integer by
decomposing it into prime factors, we can also decompose matrices in ways that
show us information about their functional properties that is not obvious from the
representation of the matrix as an array of elements.
One of the most widely used kinds of matrix decomposition is called eigen-
decomposition, in which we decompose a matrix into a set of eigenvectors and
eigenvalues.
An eigenvector of a square matrix A is a non-zero vector v such that multi-
plication by A alters only the scale of v:

Av = λv. (2.39)
The scalar λ is known as the eigenvalue corresponding to this eigenvector. (One
can also find a left eigenvector such that v^\top A = λ v^\top, but we are usually concerned with right eigenvectors.)
If v is an eigenvector of A, then so is any rescaled vector sv for s ∈ R, s ≠ 0.
Moreover, sv still has the same eigenvalue. For this reason, we usually only look
for unit eigenvectors.
Suppose that a matrix A has n linearly independent eigenvectors, {v^{(1)}, . . . , v^{(n)}}, with corresponding eigenvalues {λ_1, . . . , λ_n}. We may concatenate all of the
eigenvectors to form a matrix V with one eigenvector per column: V = [v^{(1)}, . . . , v^{(n)}]. Likewise, we can concatenate the eigenvalues to form a vector λ = [λ_1, . . . , λ_n]. The eigendecomposition of A is then given by

A = V diag(λ) V^{-1}. (2.40)

Figure 2.3: An example of the effect of eigenvectors and eigenvalues. Here, we have a matrix A with two orthonormal eigenvectors, v^{(1)} with eigenvalue λ_1 and v^{(2)} with eigenvalue λ_2. (Left) We plot the set of all unit vectors u ∈ R^2 as a unit circle. (Right) We plot the set of all points Au. By observing the way that A distorts the unit circle, we can see that it scales space in direction v^{(i)} by λ_i.
We have seen that constructing matrices with specific eigenvalues and eigenvec-
tors allows us to stretch space in desired directions. However, we often want to
decompose matrices into their eigenvalues and eigenvectors. Doing so can help
us to analyze certain properties of the matrix, much as decomposing an integer
into its prime factors can help us understand the behavior of that integer.
Not every matrix can be decomposed into eigenvalues and eigenvectors. In some
cases, the decomposition exists, but may involve complex rather than real numbers.
Fortunately, in this book, we usually need to decompose only a specific class of
matrices that have a simple decomposition. Specifically, every real symmetric
matrix can be decomposed into an expression using only real-valued eigenvectors
and eigenvalues:
A = Q Λ Q^\top, (2.41)
where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a
diagonal matrix. The eigenvalue Λi,i is associated with the eigenvector in column i
of Q, denoted as Q:,i. Because Q is an orthogonal matrix, we can think of A as
scaling space by λ_i in direction v^{(i)}. See figure 2.3 for an example.
While any real symmetric matrix A is guaranteed to have an eigendecomposi-
tion, the eigendecomposition may not be unique. If any two or more eigenvectors
share the same eigenvalue, then any set of orthogonal vectors lying in their span
are also eigenvectors with that eigenvalue, and we could equivalently choose a Q
using those eigenvectors instead. By convention, we usually sort the entries of Λ
in descending order. Under this convention, the eigendecomposition is unique only
if all of the eigenvalues are unique.
The eigendecomposition of a matrix tells us many useful facts about the
matrix. The matrix is singular if and only if any of the eigenvalues are zero.
The eigendecomposition of a real symmetric matrix can also be used to optimize
quadratic expressions of the form f(x) = x^\top A x subject to ||x||_2 = 1. Whenever x
is equal to an eigenvector of A, f takes on the value of the corresponding eigenvalue.
The maximum value of f within the constraint region is the maximum eigenvalue
and its minimum value within the constraint region is the minimum eigenvalue.
A matrix whose eigenvalues are all positive is called positive definite. A
matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. Likewise, if all eigenvalues are negative, the matrix is negative definite, and if all eigenvalues are negative or zero-valued, it is negative semidefinite. Positive semidefinite matrices are interesting because they guarantee that ∀x, x^\top A x ≥ 0. Positive definite matrices additionally guarantee that x^\top A x = 0 ⇒ x = 0.
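As an illustration (an assumed NumPy sketch with an arbitrary symmetric matrix, not material from the text), the decomposition of equation 2.41 can be computed and reassembled numerically:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])          # real symmetric matrix

    # eigh returns the eigenvalues in ascending order and the orthonormal
    # eigenvectors as the columns of Q.
    eigenvalues, Q = np.linalg.eigh(A)

    # Reassemble A = Q diag(lambda) Q^T (equation 2.41).
    assert np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A)

    # All eigenvalues are positive here, so this A is positive definite.
    assert np.all(eigenvalues > 0)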
2.8 Singular Value Decomposition
In section 2.7, we saw how to decompose a matrix into eigenvectors and eigenvalues.
The singular value decomposition (SVD) provides another way to factorize
a matrix, into singular vectors and singular values. The SVD allows us to
discover some of the same kind of information as the eigendecomposition. However,
the SVD is more generally applicable. Every real matrix has a singular value
decomposition, but the same is not true of the eigenvalue decomposition. For
example, if a matrix is not square, the eigendecomposition is not defined, and we
must use a singular value decomposition instead.
Recall that the eigendecomposition involves analyzing a matrix A to discover
a matrix V of eigenvectors and a vector of eigenvalues λ such that we can rewrite
A as
A = V diag(λ) V^{-1}. (2.42)
The singular value decomposition is similar, except this time we will write A
as a product of three matrices:
A = U D V^\top. (2.43)
Suppose that A is an m × n matrix. Then U is defined to be an m × m matrix, D to be an m × n matrix, and V to be an n × n matrix.
Each of these matrices is defined to have a special structure. The matrices U
and V are both defined to be orthogonal matrices. The matrix D is defined to be
a diagonal matrix. Note that D is not necessarily square.
The elements along the diagonal of D are known as the singular values of
the matrix A. The columns of U are known as the left-singular vectors. The
columns of V are known as the right-singular vectors.
We can actually interpret the singular value decomposition of A in terms of
the eigendecomposition of functions of A. The left-singular vectors of A are the
eigenvectors of AA. The right-singular vectors of A are the eigenvectors of A A.
The non-zero singular values of A are the square roots of the eigenvalues of A A.
The same is true for AA .
Perhaps the most useful feature of the SVD is that we can use it to partially
generalize matrix inversion to non-square matrices, as we will see in the next
section.
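A small illustrative NumPy sketch (with an arbitrary non-square matrix, assumed for the example) showing the three factors of equation 2.43 and their relation to the eigendecomposition of A A^\top:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])      # 2 x 3, so no eigendecomposition exists

    U, s, Vt = np.linalg.svd(A, full_matrices=True)

    # Rebuild A = U D V^T, where D is a 2 x 3 diagonal matrix.
    D = np.zeros_like(A)
    D[:len(s), :len(s)] = np.diag(s)
    assert np.allclose(U @ D @ Vt, A)

    # The squared singular values are the eigenvalues of A A^T
    # (and the nonzero eigenvalues of A^T A are the same).
    assert np.allclose(np.sort(s ** 2), np.sort(np.linalg.eigvalsh(A @ A.T)))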
2.9 The Moore-Penrose Pseudoinverse
Matrix inversion is not defined for matrices that are not square. Suppose we want
to make a left-inverse B of a matrix A, so that we can solve a linear equation

Ax = y (2.44)
by left-multiplying each side to obtain
x = By. (2.45)
Depending on the structure of the problem, it may not be possible to design a
unique mapping from A to B.
If A is taller than it is wide, then it is possible for this equation to have
no solution. If A is wider than it is tall, then there could be multiple possible
solutions.
The Moore-Penrose pseudoinverse allows us to make some headway in
these cases. The pseudoinverse of A is defined as a matrix

A^+ = \lim_{α \searrow 0} (A^\top A + α I)^{-1} A^\top. (2.46)
Practical algorithms for computing the pseudoinverse are not based on this defini-
tion, but rather the formula
A^+ = V D^+ U^\top, (2.47)
where U, D and V are the singular value decomposition of A, and the pseudoinverse
D+ of a diagonal matrix D is obtained by taking the reciprocal of its non-zero
elements then taking the transpose of the resulting matrix.
When A has more columns than rows, then solving a linear equation using the
pseudoinverse provides one of the many possible solutions. Specifically, it provides
the solution x = A^+ y with minimal Euclidean norm ||x||_2 among all possible solutions.
When A has more rows than columns, it is possible for there to be no solution.
In this case, using the pseudoinverse gives us the x for which Ax is as close as
possible to y in terms of Euclidean norm ||Ax − y||_2.
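Both cases can be illustrated numerically (an assumed NumPy sketch with arbitrary matrices, not a prescription from the text):

    import numpy as np

    # Wide matrix (more columns than rows): infinitely many solutions.
    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])
    y = np.array([1., 2.])
    x = np.linalg.pinv(A) @ y         # the minimum-norm solution
    assert np.allclose(A @ x, y)

    # Tall matrix (more rows than columns): possibly no exact solution.
    B = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
    z = np.array([1., 1., 0.])
    w = np.linalg.pinv(B) @ z         # minimizes ||Bw - z||_2
    # np.linalg.lstsq(B, z, rcond=None)[0] gives the same w.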
2.10 The Trace Operator
The trace operator gives the sum of all of the diagonal entries of a matrix:
Tr(A) = \sum_i A_{i,i}. (2.48)
The trace operator is useful for a variety of reasons. Some operations that are
difficult to specify without resorting to summation notation can be specified using
matrix products and the trace operator. For example, the trace operator provides
an alternative way of writing the Frobenius norm of a matrix:
||A||_F = \sqrt{ Tr(A A^\top) }. (2.49)
Writing an expression in terms of the trace operator opens up opportunities to
manipulate the expression using many useful identities. For example, the trace
operator is invariant to the transpose operator:
Tr(A) = Tr(A^\top). (2.50)
The trace of a square matrix composed of many factors is also invariant to
moving the last factor into the first position, if the shapes of the corresponding
matrices allow the resulting product to be defined:
Tr(ABC) = Tr(CAB) = Tr(BCA) (2.51)
or more generally,
Tr\left( \prod_{i=1}^{n} F^{(i)} \right) = Tr\left( F^{(n)} \prod_{i=1}^{n-1} F^{(i)} \right). (2.52)
This invariance to cyclic permutation holds even if the resulting product has a
different shape. For example, for A ∈ R^{m×n} and B ∈ R^{n×m}, we have

Tr(AB) = Tr(BA) (2.53)

even though AB ∈ R^{m×m} and BA ∈ R^{n×n}.
Another useful fact to keep in mind is that a scalar is its own trace: a = Tr(a).
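These identities are easy to check numerically (an assumed NumPy sketch with random matrices; any values work):

    import numpy as np

    A = np.random.randn(4, 3)
    B = np.random.randn(3, 4)

    # Frobenius norm via the trace (equation 2.49).
    assert np.isclose(np.linalg.norm(A, 'fro'),
                      np.sqrt(np.trace(A @ A.T)))

    # Invariance to cyclic permutation (equation 2.53), even though
    # A @ B is 4 x 4 while B @ A is 3 x 3.
    assert np.isclose(np.trace(A @ B), np.trace(B @ A))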
2.11 The Determinant
The determinant of a square matrix, denoted det(A), is a function mapping
matrices to real scalars. The determinant is equal to the product of all the
eigenvalues of the matrix. The absolute value of the determinant can be thought
of as a measure of how much multiplication by the matrix expands or contracts
space. If the determinant is 0, then space is contracted completely along at least
one dimension, causing it to lose all of its volume. If the determinant is 1, then
the transformation preserves volume.
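A small sketch (again with an arbitrary random matrix, purely for illustration) confirming that the determinant equals the product of the eigenvalues:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))

    eigvals = np.linalg.eigvals(A)   # may be complex for a general real matrix
    # det(A) equals the product of the eigenvalues (imaginary parts cancel).
    print(np.isclose(np.prod(eigvals).real, np.linalg.det(A)))   # True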
2.12 Example: Principal Components Analysis
One simple machine learning algorithm, principal components analysis or PCA,
can be derived using only knowledge of basic linear algebra.
Suppose we have a collection of m points {x^{(1)}, . . . , x^{(m)}} in ℝ^n. Suppose we
would like to apply lossy compression to these points. Lossy compression means
storing the points in a way that requires less memory but may lose some precision.
We would like to lose as little precision as possible.
One way we can encode these points is to represent a lower-dimensional version
of them. For each point x^{(i)} ∈ ℝ^n we will find a corresponding code vector c^{(i)} ∈ ℝ^l.
If l is smaller than n, it will take less memory to store the code points than the
original data. We will want to find some encoding function that produces the code
for an input, f(x) = c, and a decoding function that produces the reconstructed
input given its code, x ≈ g(f(x)).
PCA is defined by our choice of the decoding function. Specifically, to make the
decoder very simple, we choose to use matrix multiplication to map the code back
into ℝ^n. Let g(c) = Dc, where D ∈ ℝ^{n×l} is the matrix defining the decoding.
Computing the optimal code for this decoder could be a difficult problem. To
keep the encoding problem easy, PCA constrains the columns of D to be orthogonal
to each other. (Note that D is still not technically “an orthogonal matrix” unless
l = n.)
With the problem as described so far, many solutions are possible, because we
can increase the scale of D_{:,i} if we decrease c_i proportionally for all points. To give
the problem a unique solution, we constrain all of the columns of D to have unit
norm.
In order to turn this basic idea into an algorithm we can implement, the first
thing we need to do is figure out how to generate the optimal code point c∗ for
each input point x. One way to do this is to minimize the distance between the
input point x and its reconstruction, g(c∗). We can measure this distance using a
norm. In the principal components algorithm, we use the L2 norm:
c* = arg min_c ||x − g(c)||_2.   (2.54)
We can switch to the squared L2
norm instead of the L2
norm itself, because
both are minimized by the same value of c. Both are minimized by the same
value of c because the L2 norm is non-negative and the squaring operation is
monotonically increasing for non-negative arguments.
c* = arg min_c ||x − g(c)||_2^2.   (2.55)
The function being minimized simplifies to

(x − g(c))^⊤ (x − g(c))   (2.56)

(by the definition of the L^2 norm, equation 2.30)

= x^⊤x − x^⊤ g(c) − g(c)^⊤ x + g(c)^⊤ g(c)   (2.57)

(by the distributive property)

= x^⊤x − 2 x^⊤ g(c) + g(c)^⊤ g(c)   (2.58)

(because the scalar g(c)^⊤ x is equal to the transpose of itself).
We can now change the function being minimized again, to omit the first term,
since this term does not depend on c:

c* = arg min_c −2 x^⊤ g(c) + g(c)^⊤ g(c).   (2.59)
To make further progress, we must substitute in the definition of g(c):

c* = arg min_c −2 x^⊤ D c + c^⊤ D^⊤ D c   (2.60)

= arg min_c −2 x^⊤ D c + c^⊤ I_l c   (2.61)

(by the orthogonality and unit norm constraints on D)

= arg min_c −2 x^⊤ D c + c^⊤ c   (2.62)
We can solve this optimization problem using vector calculus (see section 4.3 if
you do not know how to do this):

∇_c (−2 x^⊤ D c + c^⊤ c) = 0   (2.63)

−2 D^⊤ x + 2c = 0   (2.64)

c = D^⊤ x.   (2.65)
This makes the algorithm efficient: we can optimally encode x just using a
matrix-vector operation. To encode a vector, we apply the encoder function
f(x) = D^⊤ x.   (2.66)

Using a further matrix multiplication, we can also define the PCA reconstruction
operation:

r(x) = g(f(x)) = D D^⊤ x.   (2.67)
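A minimal NumPy sketch of equations 2.66 and 2.67 (the data matrix X and the choice l = 2 are made up for illustration; here D is built from the top right-singular vectors of X, which matches how the derivation below ends up choosing it):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.standard_normal((100, 5))           # m = 100 points in R^5, one per row

    l = 2
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    D = Vt[:l].T                                # (5, 2) matrix with orthonormal columns

    x = X[0]                                    # a single example
    c = D.T @ x                                 # encoder  f(x) = D^T x,          equation 2.66
    r = D @ c                                   # reconstruction r(x) = D D^T x,  equation 2.67
    print(np.linalg.norm(x - r))                # reconstruction error for this point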
Next, we need to choose the encoding matrix D. To do so, we revisit the idea
of minimizing the L2 distance between inputs and reconstructions. Since we will
use the same matrix D to decode all of the points, we can no longer consider the
points in isolation. Instead, we must minimize the Frobenius norm of the matrix
of errors computed over all dimensions and all points:
D* = arg min_D √( Σ_{i,j} ( x_j^{(i)} − r(x^{(i)})_j )^2 )  subject to D^⊤D = I_l.   (2.68)
To derive the algorithm for finding D∗, we will start by considering the case
where l = 1. In this case, D is just a single vector, d. Substituting equation 2.67
into equation 2.68 and simplifying D into d, the problem reduces to

d* = arg min_d Σ_i ||x^{(i)} − d d^⊤ x^{(i)}||_2^2  subject to ||d||_2 = 1.   (2.69)
The above formulation is the most direct way of performing the substitution,
but is not the most stylistically pleasing way to write the equation. It places the
scalar value dx( )
i on the right of the vector d. It is more conventional to write
scalar coefficients on the left of vector they operate on. We therefore usually write
such a formula as
d∗
= arg min
d

i
||x( )
i
− d
x( )
i
d||2
2 subject to || ||
d 2 = 1, (2.70)
or, exploiting the fact that a scalar is its own transpose, as
d∗
= arg min
d

i
||x( )
i
− x( )
i 
dd||2
2 subject to || ||
d 2 = 1. (2.71)
The reader should aim to become familiar with such cosmetic rearrangements.
At this point, it can be helpful to rewrite the problem in terms of a single
design matrix of examples, rather than as a sum over separate example vectors.
This will allow us to use more compact notation. Let X ∈ ℝ^{m×n} be the matrix
defined by stacking all of the vectors describing the points, such that X_{i,:} = x^{(i)⊤}.
We can now rewrite the problem as

d* = arg min_d ||X − X d d^⊤||_F^2  subject to d^⊤d = 1.   (2.72)
Disregarding the constraint for the moment, we can simplify the Frobenius norm
portion as follows:

arg min_d ||X − X d d^⊤||_F^2   (2.73)

= arg min_d Tr( (X − X d d^⊤)^⊤ (X − X d d^⊤) )   (2.74)

(by equation 2.49)

= arg min_d Tr(X^⊤X − X^⊤X d d^⊤ − d d^⊤ X^⊤X + d d^⊤ X^⊤X d d^⊤)   (2.75)

= arg min_d Tr(X^⊤X) − Tr(X^⊤X d d^⊤) − Tr(d d^⊤ X^⊤X) + Tr(d d^⊤ X^⊤X d d^⊤)   (2.76)

= arg min_d −Tr(X^⊤X d d^⊤) − Tr(d d^⊤ X^⊤X) + Tr(d d^⊤ X^⊤X d d^⊤)   (2.77)

(because terms not involving d do not affect the arg min)

= arg min_d −2 Tr(X^⊤X d d^⊤) + Tr(d d^⊤ X^⊤X d d^⊤)   (2.78)

(because we can cycle the order of the matrices inside a trace, equation 2.52)

= arg min_d −2 Tr(X^⊤X d d^⊤) + Tr(X^⊤X d d^⊤ d d^⊤)   (2.79)

(using the same property again)
At this point, we re-introduce the constraint:
arg min_d −2 Tr(X^⊤X d d^⊤) + Tr(X^⊤X d d^⊤ d d^⊤)  subject to d^⊤d = 1   (2.80)

= arg min_d −2 Tr(X^⊤X d d^⊤) + Tr(X^⊤X d d^⊤)  subject to d^⊤d = 1   (2.81)

(due to the constraint)

= arg min_d −Tr(X^⊤X d d^⊤)  subject to d^⊤d = 1   (2.82)
= arg max_d Tr(X^⊤X d d^⊤)  subject to d^⊤d = 1   (2.83)

= arg max_d Tr(d^⊤ X^⊤X d)  subject to d^⊤d = 1.   (2.84)
This optimization problem may be solved using eigendecomposition. Specifically,
the optimal d is given by the eigenvector of X^⊤X corresponding to the largest
eigenvalue.
This derivation is specific to the case of l = 1 and recovers only the first
principal component. More generally, when we wish to recover a basis of principal
components, the matrix D is given by the l eigenvectors corresponding to the
largest eigenvalues. This may be shown using proof by induction. We recommend
writing this proof as an exercise.
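A short sketch of the l = 1 result (the data below are random and purely illustrative): the optimal d is the top eigenvector of X^⊤X, which coincides, up to sign, with the first right-singular vector of X:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 4))   # correlated data

    eigvals, eigvecs = np.linalg.eigh(X.T @ X)   # eigh returns eigenvalues in ascending order
    d = eigvecs[:, -1]                           # eigenvector with the largest eigenvalue

    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    print(np.isclose(abs(d @ Vt[0]), 1.0))       # True: same direction up to sign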
Linear algebra is one of the fundamental mathematical disciplines that is
necessary to understand deep learning. Another key area of mathematics that is
ubiquitous in machine learning is probability theory, presented next.
Chapter 3
Probability and Information
Theory
In this chapter, we describe probability theory and information theory.
Probability theory is a mathematical framework for representing uncertain
statements. It provides a means of quantifying uncertainty and axioms for deriving
new uncertain statements. In artificial intelligence applications, we use probability
theory in two major ways. First, the laws of probability tell us how AI systems
should reason, so we design our algorithms to compute or approximate various
expressions derived using probability theory. Second, we can use probability and
statistics to theoretically analyze the behavior of proposed AI systems.
Probability theory is a fundamental tool of many disciplines of science and
engineering. We provide this chapter to ensure that readers whose background is
primarily in software engineering with limited exposure to probability theory can
understand the material in this book.
While probability theory allows us to make uncertain statements and reason in
the presence of uncertainty, information theory allows us to quantify the amount
of uncertainty in a probability distribution.
If you are already familiar with probability theory and information theory, you
may wish to skip all of this chapter except for section 3.14, which describes the
graphs we use to describe structured probabilistic models for machine learning. If
you have absolutely no prior experience with these subjects, this chapter should
be sufficient to successfully carry out deep learning research projects, but we do
suggest that you consult an additional resource, such as Jaynes (2003).
3.1 Why Probability?
Many branches of computer science deal mostly with entities that are entirely
deterministic and certain. A programmer can usually safely assume that a CPU will
execute each machine instruction flawlessly. Errors in hardware do occur, but are
rare enough that most software applications do not need to be designed to account
for them. Given that many computer scientists and software engineers work in a
relatively clean and certain environment, it can be surprising that machine learning
makes heavy use of probability theory.
This is because machine learning must always deal with uncertain quantities,
and sometimes may also need to deal with stochastic (non-deterministic) quantities.
Uncertainty and stochasticity can arise from many sources. Researchers have made
compelling arguments for quantifying uncertainty using probability since at least
the 1980s. Many of the arguments presented here are summarized from or inspired
by Pearl 1988
( ).
Nearly all activities require some ability to reason in the presence of uncertainty.
In fact, beyond mathematical statements that are true by definition, it is difficult
to think of any proposition that is absolutely true or any event that is absolutely
guaranteed to occur.
There are three possible sources of uncertainty:
1. Inherent stochasticity in the system being modeled. For example, most
interpretations of quantum mechanics describe the dynamics of subatomic
particles as being probabilistic. We can also create theoretical scenarios that
we postulate to have random dynamics, such as a hypothetical card game
where we assume that the cards are truly shuffled into a random order.
2. Incomplete observability. Even deterministic systems can appear stochastic
when we cannot observe all of the variables that drive the behavior of the
system. For example, in the Monty Hall problem, a game show contestant is
asked to choose between three doors and wins a prize held behind the chosen
door. Two doors lead to a goat while a third leads to a car. The outcome
given the contestant’s choice is deterministic, but from the contestant’s point
of view, the outcome is uncertain.
3. Incomplete modeling. When we use a model that must discard some of
the information we have observed, the discarded information results in
uncertainty in the model’s predictions. For example, suppose we build a
robot that can exactly observe the location of every object around it. If the
robot discretizes space when predicting the future location of these objects,
then the discretization makes the robot immediately become uncertain about
the precise position of objects: each object could be anywhere within the
discrete cell that it was observed to occupy.
In many cases, it is more practical to use a simple but uncertain rule rather
than a complex but certain one, even if the true rule is deterministic and our
modeling system has the fidelity to accommodate a complex rule. For example, the
simple rule “Most birds fly” is cheap to develop and is broadly useful, while a rule
of the form, “Birds fly, except for very young birds that have not yet learned to
fly, sick or injured birds that have lost the ability to fly, flightless species of birds
including the cassowary, ostrich and kiwi. . .” is expensive to develop, maintain and
communicate, and after all of this effort is still very brittle and prone to failure.
While it should be clear that we need a means of representing and reasoning
about uncertainty, it is not immediately obvious that probability theory can provide
all of the tools we want for artificial intelligence applications. Probability theory
was originally developed to analyze the frequencies of events. It is easy to see
how probability theory can be used to study events like drawing a certain hand of
cards in a game of poker. These kinds of events are often repeatable. When we
say that an outcome has a probability p of occurring, it means that if we repeated
the experiment (e.g., draw a hand of cards) infinitely many times, then proportion
p of the repetitions would result in that outcome. This kind of reasoning does not
seem immediately applicable to propositions that are not repeatable. If a doctor
analyzes a patient and says that the patient has a 40% chance of having the flu,
this means something very different—we can not make infinitely many replicas of
the patient, nor is there any reason to believe that different replicas of the patient
would present with the same symptoms yet have varying underlying conditions. In
the case of the doctor diagnosing the patient, we use probability to represent a
degree of belief, with 1 indicating absolute certainty that the patient has the flu
and 0 indicating absolute certainty that the patient does not have the flu. The
former kind of probability, related directly to the rates at which events occur, is
known as frequentist probability, while the latter, related to qualitative levels
of certainty, is known as Bayesian probability.
If we list several properties that we expect common sense reasoning about
uncertainty to have, then the only way to satisfy those properties is to treat
Bayesian probabilities as behaving exactly the same as frequentist probabilities.
For example, if we want to compute the probability that a player will win a poker
game given that she has a certain set of cards, we use exactly the same formulas
as when we compute the probability that a patient has a disease given that she
has certain symptoms. For more details about why a small set of common sense
assumptions implies that the same axioms must control both kinds of probability,
see Ramsey (1926).
Probability can be seen as the extension of logic to deal with uncertainty. Logic
provides a set of formal rules for determining what propositions are implied to
be true or false given the assumption that some other set of propositions is true
or false. Probability theory provides a set of formal rules for determining the
likelihood of a proposition being true given the likelihood of other propositions.
3.2 Random Variables
A random variable is a variable that can take on different values randomly. We
typically denote the random variable itself with a lower case letter in plain typeface,
and the values it can take on with lower case script letters. For example, x1 and x2
are both possible values that the random variable x can take on. For vector-valued
variables, we would write the random variable as x and one of its values as x. On
its own, a random variable is just a description of the states that are possible; it
must be coupled with a probability distribution that specifies how likely each of
these states are.
Random variables may be discrete or continuous. A discrete random variable
is one that has a finite or countably infinite number of states. Note that these
states are not necessarily the integers; they can also just be named states that
are not considered to have any numerical value. A continuous random variable is
associated with a real value.
3.3 Probability Distributions
A probability distribution is a description of how likely a random variable or
set of random variables is to take on each of its possible states. The way we
describe probability distributions depends on whether the variables are discrete or
continuous.
3.3.1 Discrete Variables and Probability Mass Functions
A probability distribution over discrete variables may be described using a proba-
bility mass function (PMF). We typically denote probability mass functions with
a capital P. Often we associate each random variable with a different probability
mass function and the reader must infer which probability mass function to use
based on the identity of the random variable, rather than the name of the function;
P(x) is usually not the same as P(y).
The probability mass function maps from a state of a random variable to
the probability of that random variable taking on that state. The probability
that x = x is denoted as P(x), with a probability of 1 indicating that x = x is
certain and a probability of 0 indicating that x = x is impossible. Sometimes
to disambiguate which PMF to use, we write the name of the random variable
explicitly: P(x = x). Sometimes we define a variable first, then use ∼ notation to
specify which distribution it follows later: x ∼ P(x).
Probability mass functions can act on many variables at the same time. Such
a probability distribution over many variables is known as a joint probability
distribution. P(x = x, y = y) denotes the probability that x = x and y = y
simultaneously. We may also write P(x, y) for brevity.
To be a probability mass function on a random variable x, a function P must
satisfy the following properties:
• The domain of P must be the set of all possible states of x.
• ∀x ∈ x, 0 ≤ P(x) ≤ 1. An impossible event has probability 0 and no state can
be less probable than that. Likewise, an event that is guaranteed to happen
has probability 1, and no state can have a greater chance of occurring.
• Σ_{x∈x} P(x) = 1. We refer to this property as being normalized. Without
this property, we could obtain probabilities greater than one by computing
the probability of one of many events occurring.
For example, consider a single discrete random variable x with k different
states. We can place a uniform distribution on x—that is, make each of its
states equally likely—by setting its probability mass function to
P(x = x_i) = 1/k   (3.1)

for all i. We can see that this fits the requirements for a probability mass function.
The value 1/k is positive because k is a positive integer. We also see that

Σ_i P(x = x_i) = Σ_i 1/k = k/k = 1,   (3.2)

so the distribution is properly normalized.
3.3.2 Continuous Variables and Probability Density Functions
When working with continuous random variables, we describe probability distri-
butions using a probability density function (PDF) rather than a probability
mass function. To be a probability density function, a function p must satisfy the
following properties:
• The domain of p must be the set of all possible states of x.
• ∀x ∈ x, p(x) ≥ 0. Note that we do not require p(x) ≤ 1.
• ∫ p(x) dx = 1.
A probability density function p(x) does not give the probability of a specific
state directly; instead the probability of landing inside an infinitesimal region with
volume δx is given by p(x) δx.
We can integrate the density function to find the actual probability mass of a
set of points. Specifically, the probability that x lies in some set S is given by the
integral of p(x) over that set. In the univariate example, the probability that x
lies in the interval [a, b] is given by ∫_{[a,b]} p(x) dx.
For an example of a probability density function corresponding to a specific
probability density over a continuous random variable, consider a uniform distribu-
tion on an interval of the real numbers. We can do this with a function u(x;a,b),
where a and b are the endpoints of the interval, with b > a. The “;” notation means
“parametrized by”; we consider x to be the argument of the function, while a and
b are parameters that define the function. To ensure that there is no probability
mass outside the interval, we say u(x; a, b) = 0 for all x ∉ [a, b]. Within [a, b],
u(x; a, b) = 1/(b − a). We can see that this is nonnegative everywhere. Additionally,
it integrates to 1. We often denote that x follows the uniform distribution on [a, b]
by writing x ∼ U(a, b).
3.4 Marginal Probability
Sometimes we know the probability distribution over a set of variables and we want
to know the probability distribution over just a subset of them. The probability
distribution over the subset is known as the marginal probability distribution.
For example, suppose we have discrete random variables x and y, and we know
P(x, y). We can find P(x) with the sum rule:

∀x ∈ x, P(x = x) = Σ_y P(x = x, y = y).   (3.3)
The name “marginal probability” comes from the process of computing marginal
probabilities on paper. When the values of P(x, y) are written in a grid with
different values of x in rows and different values of y in columns, it is natural to
sum across a row of the grid, then write P(x) in the margin of the paper just to
the right of the row.
For continuous variables, we need to use integration instead of summation:
p(x) = ∫ p(x, y) dy.   (3.4)
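A small NumPy sketch of the sum rule on a made-up joint table (the numbers are invented for illustration; rows index states of x and columns index states of y):

    import numpy as np

    P_xy = np.array([[0.10, 0.20],
                     [0.25, 0.15],
                     [0.05, 0.25]])              # a made-up joint distribution P(x, y)
    assert np.isclose(P_xy.sum(), 1.0)

    P_x = P_xy.sum(axis=1)                       # sum across each row: equation 3.3
    P_y = P_xy.sum(axis=0)                       # marginal over columns
    print(P_x, P_y)                              # each marginal sums to 1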
3.5 Conditional Probability
In many cases, we are interested in the probability of some event, given that some
other event has happened. This is called a conditional probability. We denote
the conditional probability that y = y given x = x as P(y = y | x = x). This
conditional probability can be computed with the formula
P(y = y | x = x) = P(y = y, x = x) / P(x = x).   (3.5)
The conditional probability is only defined when P(x = x) > 0. We cannot compute
the conditional probability conditioned on an event that never happens.
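Continuing with the same kind of made-up joint table, a sketch of equation 3.5 that turns a joint distribution into the conditional P(y | x) (all numbers are invented for illustration):

    import numpy as np

    P_xy = np.array([[0.10, 0.20],
                     [0.25, 0.15],
                     [0.05, 0.25]])              # joint P(x, y): rows are states of x

    P_x = P_xy.sum(axis=1)                       # the denominator P(x = x)
    P_y_given_x = P_xy / P_x[:, None]            # equation 3.5, applied row by row
    print(P_y_given_x.sum(axis=1))               # each conditional distribution sums to 1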
It is important not to confuse conditional probability with computing what
would happen if some action were undertaken. The conditional probability that
a person is from Germany given that they speak German is quite high, but if
a randomly selected person is taught to speak German, their country of origin
does not change. Computing the consequences of an action is called making an
intervention query. Intervention queries are the domain of causal modeling,
which we do not explore in this book.
3.6 The Chain Rule of Conditional Probabilities
Any joint probability distribution over many random variables may be decomposed
into conditional distributions over only one variable:
P(x^{(1)}, . . . , x^{(n)}) = P(x^{(1)}) ∏_{i=2}^{n} P(x^{(i)} | x^{(1)}, . . . , x^{(i−1)}).   (3.6)

This observation is known as the chain rule or product rule of probability.
It follows immediately from the definition of conditional probability in equation 3.5.
For example, applying the definition twice, we get

P(a, b, c) = P(a | b, c) P(b, c)
P(b, c) = P(b | c) P(c)
P(a, b, c) = P(a | b, c) P(b | c) P(c).
3.7 Independence and Conditional Independence
Two random variables x and y are independent if their probability distribution
can be expressed as a product of two factors, one involving only x and one involving
only y:
∀x ∈ x, y ∈ y, p(x = x, y = y) = p(x = x) p(y = y).   (3.7)

Two random variables x and y are conditionally independent given a random
variable z if the conditional probability distribution over x and y factorizes in this
way for every value of z:

∀x ∈ x, y ∈ y, z ∈ z, p(x = x, y = y | z = z) = p(x = x | z = z) p(y = y | z = z).   (3.8)

We can denote independence and conditional independence with compact
notation: x ⊥ y means that x and y are independent, while x ⊥ y | z means that x
and y are conditionally independent given z.
3.8 Expectation, Variance and Covariance
The expectation or expected value of some function f(x) with respect to a
probability distribution P (x) is the average or mean value that f takes on when x
is drawn from P. For discrete variables this can be computed with a summation:

E_{x∼P}[f(x)] = Σ_x P(x) f(x),   (3.9)

while for continuous variables, it is computed with an integral:

E_{x∼p}[f(x)] = ∫ p(x) f(x) dx.   (3.10)
When the identity of the distribution is clear from the context, we may simply
write the name of the random variable that the expectation is over, as in Ex[f(x)].
If it is clear which random variable the expectation is over, we may omit the
subscript entirely, as in E[f(x)]. By default, we can assume that E[·] averages over
the values of all the random variables inside the brackets. Likewise, when there is
no ambiguity, we may omit the square brackets.
Expectations are linear, for example,
E_x[α f(x) + β g(x)] = α E_x[f(x)] + β E_x[g(x)],   (3.11)

when α and β are not dependent on x.
The variance gives a measure of how much the values of a function of a random
variable x vary as we sample different values of x from its probability distribution:
Var(f(x)) = E[ (f(x) − E[f(x)])^2 ].   (3.12)

When the variance is low, the values of f(x) cluster near their expected value. The
square root of the variance is known as the standard deviation.
The covariance gives some sense of how much two values are linearly related
to each other, as well as the scale of these variables:
Cov(f(x), g(y)) = E[ (f(x) − E[f(x)]) (g(y) − E[g(y)]) ].   (3.13)
High absolute values of the covariance mean that the values change very much
and are both far from their respective means at the same time. If the sign of the
covariance is positive, then both variables tend to take on relatively high values
simultaneously. If the sign of the covariance is negative, then one variable tends to
take on a relatively high value at the times that the other takes on a relatively
low value and vice versa. Other measures such as correlation normalize the
contribution of each variable in order to measure only how much the variables are
related, rather than also being affected by the scale of the separate variables.
The notions of covariance and dependence are related, but are in fact distinct
concepts. They are related because two variables that are independent have zero
covariance, and two variables that have non-zero covariance are dependent. How-
ever, independence is a distinct property from covariance. For two variables to have
zero covariance, there must be no linear dependence between them. Independence
is a stronger requirement than zero covariance, because independence also excludes
nonlinear relationships. It is possible for two variables to be dependent but have
zero covariance. For example, suppose we first sample a real number x from a
uniform distribution over the interval [−1, 1]. We next sample a random variable
s. With probability 1/2, we choose the value of s to be 1. Otherwise, we choose
the value of s to be −1. We can then generate a random variable y by assigning
y = sx. Clearly, x and y are not independent, because x completely determines
the magnitude of y. However, Cov(x, y) = 0.
The covariance matrix of a random vector x ∈ ℝ^n is an n×n matrix, such
that

Cov(x)_{i,j} = Cov(x_i, x_j).   (3.14)

The diagonal elements of the covariance give the variance:

Cov(x_i, x_i) = Var(x_i).   (3.15)
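The x, s, y construction above is easy to check numerically. A minimal Monte Carlo sketch (the sample size is an arbitrary choice for this illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    m = 1_000_000
    x = rng.uniform(-1.0, 1.0, size=m)
    s = rng.choice([-1.0, 1.0], size=m)              # +1 or -1, each with probability 1/2
    y = s * x                                        # y is completely determined by x and s

    print(np.cov(x, y)[0, 1])                        # empirical covariance, close to 0
    print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])   # but |y| = |x| exactly, so this is 1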
3.9 Common Probability Distributions
Several simple probability distributions are useful in many contexts in machine
learning.
3.9.1 Bernoulli Distribution
The Bernoulli distribution is a distribution over a single binary random variable.
It is controlled by a single parameter φ ∈ [0,1], which gives the probability of the
random variable being equal to 1. It has the following properties:
P(x = 1) = φ   (3.16)
P(x = 0) = 1 − φ   (3.17)
P(x = x) = φ^x (1 − φ)^{1−x}   (3.18)
E_x[x] = φ   (3.19)
Var_x(x) = φ(1 − φ)   (3.20)
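A quick Monte Carlo check of equations 3.19 and 3.20 (φ = 0.3 and the sample size are arbitrary choices for this sketch):

    import numpy as np

    phi = 0.3
    rng = np.random.default_rng(5)
    x = rng.random(1_000_000) < phi              # Bernoulli(phi) samples

    print(x.mean())                              # close to phi,              equation 3.19
    print(x.var())                               # close to phi * (1 - phi),  equation 3.20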
3.9.2 Multinoulli Distribution
The multinoulli or categorical distribution is a distribution over a single discrete
variable with k different states, where k is finite.¹ The multinoulli distribution is
parametrized by a vector p ∈ [0, 1]^{k−1}, where p_i gives the probability of the i-th
state. The final, k-th state’s probability is given by 1 − 1^⊤p. Note that we must
constrain 1^⊤p ≤ 1. Multinoulli distributions are often used to refer to distributions
over categories of objects, so we do not usually assume that state 1 has numerical
value 1, etc. For this reason, we do not usually need to compute the expectation
or variance of multinoulli-distributed random variables.

¹ “Multinoulli” is a term that was recently coined by Gustavo Lacerda and popularized by
Murphy (2012). The multinoulli distribution is a special case of the multinomial distribution. A
multinomial distribution is the distribution over vectors in {0, . . . , n}^k representing how many
times each of the k categories is visited when n samples are drawn from a multinoulli distribution.
Many texts use the term “multinomial” to refer to multinoulli distributions without clarifying
that they refer only to the n = 1 case.
The Bernoulli and multinoulli distributions are sufficient to describe any distri-
bution over their domain. They are able to describe any distribution over their
domain not so much because they are particularly powerful but rather because
their domain is simple; they model discrete variables for which it is feasible to
enumerate all of the states. When dealing with continuous variables, there are
uncountably many states, so any distribution described by a small number of
parameters must impose strict limits on the distribution.
3.9.3 Gaussian Distribution
The most commonly used distribution over real numbers is the normal distribu-
tion, also known as the Gaussian distribution:

N(x; µ, σ²) = √( 1/(2πσ²) ) exp( −(x − µ)² / (2σ²) ).   (3.21)

See figure 3.1 for a plot of the density function.
The two parameters µ ∈ R and σ ∈ (0,∞) control the normal distribution.
The parameter µ gives the coordinate of the central peak. This is also the mean of
the distribution: E[x] = µ. The standard deviation of the distribution is given by
σ, and the variance by σ².
When we evaluate the PDF, we need to square and invert σ. When we need to
frequently evaluate the PDF with different parameter values, a more efficient way
of parametrizing the distribution is to use a parameter β ∈ (0,∞) to control the
precision or inverse variance of the distribution:
N(x; µ, β^{-1}) = √( β/(2π) ) exp( −(1/2) β (x − µ)² ).   (3.22)
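A minimal sketch checking that the two parametrizations agree (the function names and test values are made up for this illustration):

    import numpy as np

    def normal_pdf(x, mu, sigma2):
        """Equation 3.21: variance parametrization."""
        return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

    def normal_pdf_precision(x, mu, beta):
        """Equation 3.22: precision (inverse variance) parametrization."""
        return np.sqrt(beta / (2 * np.pi)) * np.exp(-0.5 * beta * (x - mu) ** 2)

    x = np.linspace(-3, 3, 7)
    print(np.allclose(normal_pdf(x, 0.0, 4.0),
                      normal_pdf_precision(x, 0.0, 0.25)))   # True: beta = 1 / sigma^2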
Normal distributions are a sensible choice for many applications. In the absence
of prior knowledge about what form a distribution over the real numbers should
take, the normal distribution is a good default choice for two major reasons.
Figure 3.1: The normal distribution. The normal distribution N(x; µ, σ²) exhibits
a classic “bell curve” shape, with the x coordinate of its central peak given by µ, and
the width of its peak controlled by σ. The maximum of p(x) is at x = µ and the
inflection points are at x = µ ± σ. In this example, we depict the standard normal
distribution, with µ = 0 and σ = 1.
First, many distributions we wish to model are truly close to being normal
distributions. The central limit theorem shows that the sum of many indepen-
dent random variables is approximately normally distributed. This means that
in practice, many complicated systems can be modeled successfully as normally
distributed noise, even if the system can be decomposed into parts with more
structured behavior.
Second, out of all possible probability distributions with the same variance,
the normal distribution encodes the maximum amount of uncertainty over the
real numbers. We can thus think of the normal distribution as being the one
that inserts the least amount of prior knowledge into a model. Fully developing
and justifying this idea requires more mathematical tools, and is postponed to
section 19.4.2.
The normal distribution generalizes to ℝ^n, in which case it is known as the
multivariate normal distribution. It may be parametrized with a positive
definite symmetric matrix Σ:

N(x; µ, Σ) = √( 1 / ((2π)^n det(Σ)) ) exp( −(1/2) (x − µ)^⊤ Σ^{-1} (x − µ) ).   (3.23)
The parameter µ still gives the mean of the distribution, though now it is
vector-valued. The parameter Σ gives the covariance matrix of the distribution.
As in the univariate case, when we wish to evaluate the PDF several times for
many different values of the parameters, the covariance is not a computationally
efficient way to parametrize the distribution, since we need to invert Σ to evaluate
the PDF. We can instead use a precision matrix β:

N(x; µ, β^{-1}) = √( det(β) / (2π)^n ) exp( −(1/2) (x − µ)^⊤ β (x − µ) ).   (3.24)
We often fix the covariance matrix to be a diagonal matrix. An even simpler
version is the isotropic Gaussian distribution, whose covariance matrix is a scalar
times the identity matrix.
3.9.4 Exponential and Laplace Distributions
In the context of deep learning, we often want to have a probability distribution
with a sharp point at x = 0. To accomplish this, we can use the exponential
distribution:
p(x; λ) = λ 1_{x≥0} exp(−λx).   (3.25)

The exponential distribution uses the indicator function 1_{x≥0} to assign probability
zero to all negative values of x.
A closely related probability distribution that allows us to place a sharp peak
of probability mass at an arbitrary point µ is the Laplace distribution

Laplace(x; µ, γ) = (1/(2γ)) exp( −|x − µ|/γ ).   (3.26)
3.9.5 The Dirac Distribution and Empirical Distribution
In some cases, we wish to specify that all of the mass in a probability distribution
clusters around a single point. This can be accomplished by defining a PDF using
the Dirac delta function, δ(x):

p(x) = δ(x − µ).   (3.27)
The Dirac delta function is defined such that it is zero-valued everywhere except
0, yet integrates to 1. The Dirac delta function is not an ordinary function that
associates each value x with a real-valued output, instead it is a different kind of
mathematical object called a generalized function that is defined in terms of its
properties when integrated. We can think of the Dirac delta function as being the
limit point of a series of functions that put less and less mass on all points other
than zero.
By defining p(x) to be δ shifted by −µ we obtain an infinitely narrow and
infinitely high peak of probability mass where x = µ.
A common use of the Dirac delta distribution is as a component of an empirical
distribution,
p̂( ) =
x
1
m
m

i=1
δ(x x
− ( )
i
) (3.28)
which puts probability mass 1
m on each of the m points x(1)
,. .. ,x( )
m
forming a
given dataset or collection of samples. The Dirac delta distribution is only necessary
to define the empirical distribution over continuous variables. For discrete variables,
the situation is simpler: an empirical distribution can be conceptualized as a
multinoulli distribution, with a probability associated to each possible input value
that is simply equal to the empirical frequency of that value in the training set.
We can view the empirical distribution formed from a dataset of training
examples as specifying the distribution that we sample from when we train a model
on this dataset. Another important perspective on the empirical distribution is
that it is the probability density that maximizes the likelihood of the training data
(see section 5.5).
3.9.6 Mixtures of Distributions
It is also common to define probability distributions by combining other simpler
probability distributions. One common way of combining distributions is to
construct a mixture distribution. A mixture distribution is made up of several
component distributions. On each trial, the choice of which component distribution
generates the sample is determined by sampling a component identity from a
multinoulli distribution:
P(x) = Σ_i P(c = i) P(x | c = i)   (3.29)

where P(c) is the multinoulli distribution over component identities.
We have already seen one example of a mixture distribution: the empirical
distribution over real-valued variables is a mixture distribution with one Dirac
component for each training example.
The mixture model is one simple strategy for combining probability distributions
to create a richer distribution. In chapter 16, we explore the art of building complex
probability distributions from simple ones in more detail.
The mixture model allows us to briefly glimpse a concept that will be of
paramount importance later—the latent variable. A latent variable is a random
variable that we cannot observe directly. The component identity variable c of the
mixture model provides an example. Latent variables may be related to x through
the joint distribution, in this case, P(x, c) = P(x | c) P(c). The distribution P(c)
over the latent variable and the distribution P(x | c) relating the latent variables
to the visible variables determines the shape of the distribution P(x) even though
it is possible to describe P(x) without reference to the latent variable. Latent
variables are discussed further in section 16.5.
A very powerful and common type of mixture model is the Gaussian mixture
model, in which the components p(x | c = i) are Gaussians. Each component has
a separately parametrized mean µ^{(i)} and covariance Σ^{(i)}. Some mixtures can have
more constraints. For example, the covariances could be shared across components
via the constraint Σ^{(i)} = Σ, ∀i. As with a single Gaussian distribution, the mixture
of Gaussians might constrain the covariance matrix for each component to be
diagonal or isotropic.
In addition to the means and covariances, the parameters of a Gaussian mixture
specify the prior probability αi = P(c = i) given to each component i. The word
“prior” indicates that it expresses the model’s beliefs about c before it has observed
x. By comparison, P(c | x) is a posterior probability, because it is computed
after observation of x. A Gaussian mixture model is a universal approximator
of densities, in the sense that any smooth density can be approximated with any
specific, non-zero amount of error by a Gaussian mixture model with enough
components.
Figure 3.2 shows samples from a Gaussian mixture model.
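A minimal sampling sketch for equation 3.29 with Gaussian components (the mixture weights, means and covariances below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(6)

    alphas = np.array([0.5, 0.3, 0.2])                       # prior P(c = i)
    mus = np.array([[0., 0.], [4., 4.], [-4., 4.]])          # component means
    covs = np.array([np.eye(2),                              # isotropic
                     np.diag([1., 3.]),                      # diagonal
                     [[2., 1.], [1., 2.]]])                  # full covariance

    m = 1000
    c = rng.choice(3, size=m, p=alphas)                      # sample component identities
    samples = np.array([rng.multivariate_normal(mus[i], covs[i]) for i in c])
    print(samples.shape)                                     # (1000, 2)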
3.10 Useful Properties of Common Functions
Certain functions arise often while working with probability distributions, especially
the probability distributions used in deep learning models.
One of these functions is the logistic sigmoid:

σ(x) = 1 / (1 + exp(−x)).   (3.30)
The logistic sigmoid is commonly used to produce the φ parameter of a Bernoulli
Figure 3.2: Samples from a Gaussian mixture model. In this example, there are three
components. From left to right, the first component has an isotropic covariance matrix,
meaning it has the same amount of variance in each direction. The second has a diagonal
covariance matrix, meaning it can control the variance separately along each axis-aligned
direction. This example has more variance along the x2 axis than along the x1 axis. The
third component has a full-rank covariance matrix, allowing it to control the variance
separately along an arbitrary basis of directions.
distribution because its range is (0,1), which lies within the valid range of values
for the φ parameter. See figure 3.3 for a graph of the sigmoid function. The
sigmoid function saturates when its argument is very positive or very negative,
meaning that the function becomes very flat and insensitive to small changes in its
input.
Another commonly encountered function is the softplus function (Dugas et al.,
2001):

ζ(x) = log(1 + exp(x)).   (3.31)

The softplus function can be useful for producing the β or σ parameter of a normal
distribution because its range is (0, ∞). It also arises commonly when manipulating
expressions involving sigmoids. The name of the softplus function comes from the
fact that it is a smoothed or “softened” version of

x^+ = max(0, x).   (3.32)

See figure 3.4 for a graph of the softplus function.
The following properties are all useful enough that you may wish to memorize
them:
Figure 3.3: The logistic sigmoid function.

Figure 3.4: The softplus function.
σ(x) = exp(x) / (exp(x) + exp(0))   (3.33)

(d/dx) σ(x) = σ(x)(1 − σ(x))   (3.34)

1 − σ(x) = σ(−x)   (3.35)

log σ(x) = −ζ(−x)   (3.36)

(d/dx) ζ(x) = σ(x)   (3.37)

∀x ∈ (0, 1), σ^{-1}(x) = log( x / (1 − x) )   (3.38)

∀x > 0, ζ^{-1}(x) = log( exp(x) − 1 )   (3.39)

ζ(x) = ∫_{−∞}^{x} σ(y) dy   (3.40)

ζ(x) − ζ(−x) = x   (3.41)
The function σ^{-1}(x) is called the logit in statistics, but this term is more rarely
used in machine learning.
Equation 3.41 provides extra justification for the name “softplus.” The softplus
function is intended as a smoothed version of the positive part function, x^+ =
max{0, x}. The positive part function is the counterpart of the negative part
function, x^− = max{0, −x}. To obtain a smooth function that is analogous to the
negative part, one can use ζ(−x). Just as x can be recovered from its positive part
and negative part via the identity x^+ − x^− = x, it is also possible to recover x
using the same relationship between ζ(x) and ζ(−x), as shown in equation 3.41.
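A short numerical check of a few of these identities (the grid of test points is arbitrary; this sketch does not attempt a numerically stable softplus):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def softplus(x):
        return np.log1p(np.exp(x))                           # zeta(x) = log(1 + exp(x))

    x = np.linspace(-5, 5, 11)
    print(np.allclose(1 - sigmoid(x), sigmoid(-x)))          # equation 3.35
    print(np.allclose(np.log(sigmoid(x)), -softplus(-x)))    # equation 3.36
    print(np.allclose(softplus(x) - softplus(-x), x))        # equation 3.41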
3.11 Bayes’ Rule
We often find ourselves in a situation where we know P(y | x) and need to know
P(x | y). Fortunately, if we also know P(x), we can compute the desired quantity
using Bayes’ rule:

P(x | y) = P(x) P(y | x) / P(y).   (3.42)

Note that while P(y) appears in the formula, it is usually feasible to compute
P(y) = Σ_x P(y | x) P(x), so we do not need to begin with knowledge of P(y).
Bayes’ rule is straightforward to derive from the definition of conditional
probability, but it is useful to know the name of this formula since many texts
refer to it by name. It is named after the Reverend Thomas Bayes, who first
discovered a special case of the formula. The general version presented here was
independently discovered by Pierre-Simon Laplace.
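A tiny numerical sketch of equation 3.42 over a discrete x with three states (the prior and likelihood values are hypothetical):

    import numpy as np

    P_x = np.array([0.5, 0.3, 0.2])              # prior P(x) over three states
    P_y_given_x = np.array([0.10, 0.60, 0.30])   # likelihood of one observed y under each state

    P_y = np.sum(P_y_given_x * P_x)              # marginalize, as noted above
    P_x_given_y = P_x * P_y_given_x / P_y        # Bayes' rule, equation 3.42
    print(P_x_given_y, P_x_given_y.sum())        # posterior, sums to 1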
3.12 Technical Details of Continuous Variables
A proper formal understanding of continuous random variables and probability
density functions requires developing probability theory in terms of a branch of
mathematics known as measure theory. Measure theory is beyond the scope of
this textbook, but we can briefly sketch some of the issues that measure theory is
employed to resolve.
In section 3.3.2, we saw that the probability of a continuous vector-valued x
lying in some set S is given by the integral of p(x) over the set S. Some choices
of set S can produce paradoxes. For example, it is possible to construct two sets
S1 and S2 such that p(x ∈ S1) + p(x ∈ S2) > 1 but S1 ∩ S2 = ∅. These sets
are generally constructed making very heavy use of the infinite precision of real
numbers, for example by making fractal-shaped sets or sets that are defined by
transforming the set of rational numbers (the Banach-Tarski theorem provides a
fun example of such sets). One of the key contributions of measure
theory is to provide a characterization of the set of sets that we can compute the
probability of without encountering paradoxes. In this book, we only integrate
over sets with relatively simple descriptions, so this aspect of measure theory never
becomes a relevant concern.
For our purposes, measure theory is more useful for describing theorems that
apply to most points in Rn but do not apply to some corner cases. Measure theory
provides a rigorous way of describing that a set of points is negligibly small. Such
a set is said to have measure zero. We do not formally define this concept in this
textbook. For our purposes, it is sufficient to understand the intuition that a set
of measure zero occupies no volume in the space we are measuring. For example,
within ℝ², a line has measure zero, while a filled polygon has positive measure.
Likewise, an individual point has measure zero. Any union of countably many sets
that each have measure zero also has measure zero (so the set of all the rational
numbers has measure zero, for instance).
Another useful term from measure theory is almost everywhere. A property
that holds almost everywhere holds throughout all of space except for on a set of
measure zero. Because the exceptions occupy a negligible amount of space, they
can be safely ignored for many applications. Some important results in probability
theory hold for all discrete values but only hold “almost everywhere” for continuous
values.
Another technical detail of continuous variables relates to handling continuous
random variables that are deterministic functions of one another. Suppose we have
two random variables, x and y, such that y = g(x), where g is an invertible, con-
tinuous, differentiable transformation. One might expect that p_y(y) = p_x(g^{-1}(y)).
This is actually not the case.
As a simple example, suppose we have scalar random variables x and y. Suppose
y = x/2 and x ∼ U(0, 1). If we use the rule p_y(y) = p_x(2y) then p_y will be 0
everywhere except the interval [0, 1/2], and it will be 1 on this interval. This means

∫ p_y(y) dy = 1/2,   (3.43)
which violates the definition of a probability distribution. This is a common mistake.
The problem with this approach is that it fails to account for the distortion of
space introduced by the function g. Recall that the probability of x lying in an
infinitesimally small region with volume δx is given by p(x)δx. Since g can expand
or contract space, the infinitesimal volume surrounding x in x space may have
different volume in space.
y
To see how to correct the problem, we return to the scalar case. We need to
preserve the property
|py( ( )) =
g x dy| |px( )
x dx .
| (3.44)
Solving from this, we obtain
py( ) =
y px (g−1
( ))
y




∂x
∂y



 (3.45)
or equivalently
px( ) =
x py( ( ))
g x




∂g x
( )
∂x



. (3.46)
In higher dimensions, the derivative generalizes to the determinant of the Jacobian
matrix—the matrix with J_{i,j} = ∂x_i/∂y_j. Thus, for real-valued vectors x and y,

p_x(x) = p_y(g(x)) | det( ∂g(x)/∂x ) |.   (3.47)
3.13 Information Theory
Information theory is a branch of applied mathematics that revolves around
quantifying how much information is present in a signal. It was originally invented
to study sending messages from discrete alphabets over a noisy channel, such as
communication via radio transmission. In this context, information theory tells how
to design optimal codes and calculate the expected length of messages sampled from
specific probability distributions using various encoding schemes. In the context of
machine learning, we can also apply information theory to continuous variables
where some of these message length interpretations do not apply. This field is
fundamental to many areas of electrical engineering and computer science. In this
textbook, we mostly use a few key ideas from information theory to characterize
probability distributions or quantify similarity between probability distributions.
For more detail on information theory, see Cover and Thomas (2006) or MacKay
(2003).
The basic intuition behind information theory is that learning that an unlikely
event has occurred is more informative than learning that a likely event has
occurred. A message saying “the sun rose this morning” is so uninformative as
to be unnecessary to send, but a message saying “there was a solar eclipse this
morning” is very informative.
We would like to quantify information in a way that formalizes this intuition.
Specifically,
• Likely events should have low information content, and in the extreme case,
events that are guaranteed to happen should have no information content
whatsoever.
• Less likely events should have higher information content.
• Independent events should have additive information. For example, finding
out that a tossed coin has come up as heads twice should convey twice as
much information as finding out that a tossed coin has come up as heads
once.
In order to satisfy all three of these properties, we define the self-information
of an event x = x to be

I(x) = −log P(x).   (3.48)
In this book, we always use log to mean the natural logarithm, with base e. Our
definition of I (x) is therefore written in units of nats. One nat is the amount of
information gained by observing an event of probability 1/e. Other texts use base-2
logarithms and units called bits or shannons; information measured in bits is
just a rescaling of information measured in nats.
When x is continuous, we use the same definition of information by analogy,
but some of the properties from the discrete case are lost. For example, an event
with unit density still has zero information, despite not being an event that is
guaranteed to occur.
Self-information deals only with a single outcome. We can quantify the amount
of uncertainty in an entire probability distribution using the Shannon entropy:
H(x) = E_{x∼P}[I(x)] = −E_{x∼P}[log P(x)],   (3.49)
also denoted H(P). In other words, the Shannon entropy of a distribution is the
expected amount of information in an event drawn from that distribution. It gives
a lower bound on the number of bits (if the logarithm is base 2, otherwise the units
are different) needed on average to encode symbols drawn from a distribution P.
Distributions that are nearly deterministic (where the outcome is nearly certain)
have low entropy; distributions that are closer to uniform have high entropy. See
figure 3.5 for a demonstration. When x is continuous, the Shannon entropy is
known as the differential entropy.
If we have two separate probability distributions P (x) and Q(x) over the same
random variable x, we can measure how different these two distributions are using
the Kullback-Leibler (KL) divergence:
D_KL(P ‖ Q) = E_{x∼P}[ log( P(x)/Q(x) ) ] = E_{x∼P}[ log P(x) − log Q(x) ].   (3.50)
In the case of discrete variables, it is the extra amount of information (measured
in bits if we use the base-2 logarithm, but in machine learning we usually use nats
and the natural logarithm) needed to send a message containing symbols drawn
from probability distribution P, when we use a code that was designed to minimize
the length of messages drawn from probability distribution Q.
The KL divergence has many useful properties, most notably that it is non-
negative. The KL divergence is 0 if and only if P and Q are the same distribution in
the case of discrete variables, or equal “almost everywhere” in the case of continuous
variables. Because the KL divergence is non-negative and measures the difference
between two distributions, it is often conceptualized as measuring some sort of
distance between these distributions. However, it is not a true distance measure
because it is not symmetric: D_KL(P ‖ Q) ≠ D_KL(Q ‖ P) for some P and Q. This
Figure 3.5: Shannon entropy (in nats) of a binary random variable. This plot shows how
distributions that are closer to deterministic have low Shannon entropy while distributions
that are close to uniform have high Shannon entropy. On the horizontal axis, we plot p,
the probability of a binary random variable being equal to 1. The entropy is given by
(p − 1) log(1 − p) − p log p. When p is near 0, the distribution is nearly deterministic,
because the random variable is nearly always 0. When p is near 1, the distribution is
nearly deterministic, because the random variable is nearly always 1. When p = 0.5, the
entropy is maximal, because the distribution is uniform over the two outcomes.
asymmetry means that there are important consequences to the choice of whether
to use D_KL(P ‖ Q) or D_KL(Q ‖ P). See figure 3.6 for more detail.
A quantity that is closely related to the KL divergence is the cross-entropy
H(P, Q) = H(P) + D_KL(P ‖ Q), which is similar to the KL divergence but lacking
the term on the left:

H(P, Q) = −E_{x∼P} log Q(x).   (3.51)

Minimizing the cross-entropy with respect to Q is equivalent to minimizing the
KL divergence, because Q does not participate in the omitted term.
When computing many of these quantities, it is common to encounter expres-
sions of the form 0 log 0. By convention, in the context of information theory, we
treat these expressions as lim_{x→0} x log x = 0.
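A small sketch computing entropy, KL divergence and cross-entropy for two made-up discrete distributions, including the 0 log 0 convention (all values are invented for illustration):

    import numpy as np

    def entropy(p):
        p = p[p > 0]                             # the convention 0 log 0 = 0
        return -np.sum(p * np.log(p))            # in nats, since we use the natural log

    def kl(p, q):
        mask = p > 0
        return np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask])))

    P = np.array([0.1, 0.4, 0.5])
    Q = np.array([0.8, 0.1, 0.1])
    print(entropy(P))                            # Shannon entropy H(P)
    print(kl(P, Q), kl(Q, P))                    # the KL divergence is asymmetric
    print(entropy(P) + kl(P, Q))                 # cross-entropy H(P, Q), equation 3.51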
3.14 Structured Probabilistic Models
Machine learning algorithms often involve probability distributions over a very
large number of random variables. Often, these probability distributions involve
direct interactions between relatively few variables. Using a single function to
Figure 3.6: The KL divergence is asymmetric. Suppose we have a distribution p(x) and
wish to approximate it with another distribution q(x). We have the choice of minimizing
either D_KL(p ‖ q) or D_KL(q ‖ p). We illustrate the effect of this choice using a mixture of
two Gaussians for p, and a single Gaussian for q. The choice of which direction of the
KL divergence to use is problem-dependent. Some applications require an approximation
that usually places high probability anywhere that the true distribution places high
probability, while other applications require an approximation that rarely places high
probability anywhere that the true distribution places low probability. The choice of the
direction of the KL divergence reflects which of these considerations takes priority for each
application. (Left) The effect of minimizing D_KL(p ‖ q), that is, q* = argmin_q D_KL(p ‖ q).
In this case, we select a q that has high probability where p has high probability. When p
has multiple modes, q chooses to blur the modes together, in order to put high probability
mass on all of them. (Right) The effect of minimizing D_KL(q ‖ p), that is,
q* = argmin_q D_KL(q ‖ p). In this case, we select a q that has low probability where
p has low probability. When p has multiple modes that are sufficiently widely separated,
as in this figure, the KL divergence is minimized by choosing a single mode, in order to
avoid putting probability mass in the low-probability areas between modes of p. Here, we
illustrate the outcome when q is chosen to emphasize the left mode. We could also have
achieved an equal value of the KL divergence by choosing the right mode. If the modes
are not separated by a sufficiently strong low probability region, then this direction of the
KL divergence can still choose to blur the modes.
describe the entire joint probability distribution can be very inefficient (both
computationally and statistically).
Instead of using a single function to represent a probability distribution, we
can split a probability distribution into many factors that we multiply together.
For example, suppose we have three random variables: a, b and c. Suppose that
a influences the value of b and b influences the value of c, but that a and c are
independent given b. We can represent the probability distribution over all three
variables as a product of probability distributions over two variables:
p(a, b, c) = p(a) p(b | a) p(c | b).   (3.52)
These factorizations can greatly reduce the number of parameters needed
to describe the distribution. Each factor uses a number of parameters that is
exponential in the number of variables in the factor. This means that we can greatly
reduce the cost of representing a distribution if we are able to find a factorization
into distributions over fewer variables.
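To make the savings concrete, the following sketch, included here only as an illustration with arbitrary example sizes, counts free parameters for a chain of n binary variables: the full joint table needs 2^n − 1 parameters, while the chain factorization needs only 1 + 2(n − 1).

def chain_parameters(n):
    # Full joint table over n binary variables: 2**n - 1 free parameters.
    full_joint = 2 ** n - 1
    # Chain factorization p(a1) * p(a2 | a1) * ... * p(an | a(n-1)):
    # 1 parameter for p(a1), plus 2 per conditional table.
    factored = 1 + 2 * (n - 1)
    return full_joint, factored

for n in (3, 10, 20):
    print(n, chain_parameters(n))   # (7, 5), (1023, 19), (1048575, 39)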
We can describe these kinds of factorizations using graphs. Here we use the word
“graph” in the sense of graph theory: a set of vertices that may be connected to each
other with edges. When we represent the factorization of a probability distribution
with a graph, we call it a structured probabilistic model or graphical model.
There are two main kinds of structured probabilistic models: directed and
undirected. Both kinds of graphical models use a graph G in which each node
in the graph corresponds to a random variable, and an edge connecting two
random variables means that the probability distribution is able to represent direct
interactions between those two random variables.
Directed models use graphs with directed edges, and they represent fac-
torizations into conditional probability distributions, as in the example above.
Specifically, a directed model contains one factor for every random variable xi in
the distribution, and that factor consists of the conditional distribution over xi
given the parents of xi, denoted PaG(xi):
p(x) = ∏i p(xi | PaG(xi)). (3.53)
See figure 3.7 for an example of a directed graph and the factorization of probability distributions it represents.
Undirected models use graphs with undirected edges, and they represent
factorizations into a set of functions; unlike in the directed case, these functions are usually not probability distributions of any kind.
Figure 3.7: A directed graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = p(a) p(b | a) p(c | a, b) p(d | b) p(e | c). (3.54)

This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.
Any set of nodes that are all connected to each other in G is called a clique. Each clique C^(i) in an undirected model is associated with a factor φ^(i)(C^(i)). These factors are just functions, not probability distributions. The output of each factor must be non-negative, but there is no constraint that the factor must sum or integrate to 1 like a probability distribution.
The probability of a configuration of random variables is proportional to the
product of all of these factors—assignments that result in larger factor values are
more likely. Of course, there is no guarantee that this product will sum to 1. We
therefore divide by a normalizing constant Z, defined to be the sum or integral
over all states of the product of the φ functions, in order to obtain a normalized
probability distribution:
p(x) = (1/Z) ∏i φ^(i)(C^(i)). (3.55)
See figure 3.8 for an example of an undirected graph and the factorization of probability distributions it represents.
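As a hypothetical illustration of equation 3.55, the sketch below normalizes a tiny undirected model over three binary variables by brute-force summation; the factor tables are arbitrary non-negative example values, not taken from the text.

import itertools
import numpy as np

# Arbitrary non-negative example factors over binary variables (a, b) and (b, c).
phi_ab = np.array([[1.0, 2.0],
                   [0.5, 1.5]])    # phi_ab[a, b]
phi_bc = np.array([[2.0, 1.0],
                   [1.0, 3.0]])    # phi_bc[b, c]

def unnormalized(a, b, c):
    return phi_ab[a, b] * phi_bc[b, c]

# Z is the sum over all 2**3 joint states of the product of the factors.
Z = sum(unnormalized(a, b, c)
        for a, b, c in itertools.product([0, 1], repeat=3))

def p(a, b, c):
    return unnormalized(a, b, c) / Z

# The normalized probabilities sum to 1.
total = sum(p(a, b, c) for a, b, c in itertools.product([0, 1], repeat=3))
assert np.isclose(total, 1.0)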
Keep in mind that these graphical representations of factorizations are a
language for describing probability distributions. They are not mutually exclusive
families of probability distributions. Being directed or undirected is not a property of a probability distribution; it is a property of a particular description of a probability distribution, but any probability distribution may be described in both ways.
Figure 3.8: An undirected graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = (1/Z) φ^(1)(a, b, c) φ^(2)(b, d) φ^(3)(c, e). (3.56)

This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.
Throughout parts I and II of this book, we will use structured probabilistic models merely as a language to describe which direct probabilistic relationships different machine learning algorithms choose to represent. No further understanding of structured probabilistic models is needed until the discussion of research topics, in part III, where we will explore structured probabilistic models in much greater detail.
This chapter has reviewed the basic concepts of probability theory that are
most relevant to deep learning. One more set of fundamental mathematical tools
remains: numerical methods.
Chapter 4
Numerical Computation
Machine learning algorithms usually require a high amount of numerical compu-
tation. This typically refers to algorithms that solve mathematical problems by
methods that update estimates of the solution via an iterative process, rather than
analytically deriving a formula providing a symbolic expression for the correct so-
lution. Common operations include optimization (finding the value of an argument
that minimizes or maximizes a function) and solving systems of linear equations.
Even just evaluating a mathematical function on a digital computer can be difficult
when the function involves real numbers, which cannot be represented precisely
using a finite amount of memory.
4.1 Overflow and Underflow
The fundamental difficulty in performing continuous math on a digital computer
is that we need to represent infinitely many real numbers with a finite number
of bit patterns. This means that for almost all real numbers, we incur some
approximation error when we represent the number in the computer. In many
cases, this is just rounding error. Rounding error is problematic, especially when
it compounds across many operations, and can cause algorithms that work in
theory to fail in practice if they are not designed to minimize the accumulation of
rounding error.
One form of rounding error that is particularly devastating is underflow.
Underflow occurs when numbers near zero are rounded to zero. Many functions
behave qualitatively differently when their argument is zero rather than a small
positive number. For example, we usually want to avoid division by zero (some
software environments will raise exceptions when this occurs, others will return a
result with a placeholder not-a-number value) or taking the logarithm of zero (this
is usually treated as −∞, which then becomes not-a-number if it is used for many
further arithmetic operations).
Another highly damaging form of numerical error is overflow. Overflow occurs
when numbers with large magnitude are approximated as ∞ or −∞. Further
arithmetic will usually change these infinite values into not-a-number values.
One example of a function that must be stabilized against underflow and
overflow is the softmax function. The softmax function is often used to predict the
probabilities associated with a multinoulli distribution. The softmax function is
defined to be
softmax(x)i = exp(xi) / ∑_{j=1}^n exp(xj). (4.1)
Consider what happens when all of the xi are equal to some constant c. Analytically,
we can see that all of the outputs should be equal to 1/n. Numerically, this may
not occur when c has large magnitude. If c is very negative, then exp(c) will
underflow. This means the denominator of the softmax will become 0, so the final
result is undefined. When c is very large and positive, exp(c) will overflow, again
resulting in the expression as a whole being undefined. Both of these difficulties
can be resolved by instead evaluating softmax(z) where z = x − maxi xi. Simple
algebra shows that the value of the softmax function is not changed analytically by
adding or subtracting a scalar from the input vector. Subtracting maxi xi results
in the largest argument to exp being 0, which rules out the possibility of overflow.
Likewise, at least one term in the denominator has a value of 1, which rules out
the possibility of underflow in the denominator leading to a division by zero.
There is still one small problem. Underflow in the numerator can still cause
the expression as a whole to evaluate to zero. This means that if we implement
log softmax(x) by first running the softmax subroutine then passing the result to
the log function, we could erroneously obtain −∞. Instead, we must implement
a separate function that calculates log softmax in a numerically stable way. The
log softmax function can be stabilized using the same trick as we used to stabilize the softmax function.
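The following NumPy sketch, offered only as an illustration (the function names are our own), implements both stabilizations: each function subtracts maxi xi before exponentiating, and log_softmax works directly with the shifted inputs rather than calling log on the softmax output.

import numpy as np

def softmax(x):
    # Subtracting the maximum leaves the result unchanged analytically,
    # but makes the largest argument to exp equal to 0, ruling out overflow.
    z = x - np.max(x)
    e = np.exp(z)
    return e / np.sum(e)

def log_softmax(x):
    # Work directly with the shifted inputs, so we never take log(0)
    # even when some exp(z_i) underflow to zero.
    z = x - np.max(x)
    return z - np.log(np.sum(np.exp(z)))

x = np.array([1e4, 0.0, -1e4])   # naive exp(x) would overflow here
print(softmax(x))                # approximately [1., 0., 0.]
print(log_softmax(x))            # finite values; naive log(softmax(x)) would give -inf entries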
For the most part, we do not explicitly detail all of the numerical considerations
involved in implementing the various algorithms described in this book. Developers
of low-level libraries should keep numerical issues in mind when implementing
deep learning algorithms. Most readers of this book can simply rely on low-
level libraries that provide stable implementations. In some cases, it is possible
to implement a new algorithm and have the new implementation automatically
stabilized. Theano (Bergstra et al., 2010; Bastien et al., 2012) is an example of a software package that automatically detects and stabilizes many common numerically unstable expressions that arise in the context of deep learning.
4.2 Poor Conditioning
Conditioning refers to how rapidly a function changes with respect to small changes
in its inputs. Functions that change rapidly when their inputs are perturbed slightly
can be problematic for scientific computation because rounding errors in the inputs
can result in large changes in the output.
Consider the function f(x) = A⁻¹x. When A ∈ R^(n×n) has an eigenvalue decomposition, its condition number is

max_{i,j} |λi / λj|. (4.2)
This is the ratio of the magnitude of the largest and smallest eigenvalue. When
this number is large, matrix inversion is particularly sensitive to error in the input.
This sensitivity is an intrinsic property of the matrix itself, not the result
of rounding error during matrix inversion. Poorly conditioned matrices amplify
pre-existing errors when we multiply by the true matrix inverse. In practice, the
error will be compounded further by numerical errors in the inversion process itself.
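As an illustrative sketch with an arbitrary symmetric example matrix, the snippet below computes the condition number of equation 4.2 and shows how a small input perturbation is amplified when we solve a linear system with a poorly conditioned matrix.

import numpy as np

# Arbitrary symmetric example matrix with widely spread eigenvalues.
A = np.array([[1.0, 0.0],
              [0.0, 1e-4]])

eigvals = np.linalg.eigvals(A)
condition_number = np.max(np.abs(eigvals)) / np.min(np.abs(eigvals))
print(condition_number)          # 1e4: poorly conditioned

b = np.array([1.0, 1.0])
delta = np.array([0.0, 1e-6])    # tiny perturbation of the input

x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + delta)
print(np.linalg.norm(x2 - x1))   # about 1e-2: the 1e-6 change is amplified by about 1e4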
4.3 Gradient-Based Optimization
Most deep learning algorithms involve optimization of some sort. Optimization
refers to the task of either minimizing or maximizing some function f(x) by altering
x. We usually phrase most optimization problems in terms of minimizing f(x).
Maximization may be accomplished via a minimization algorithm by minimizing
−f(x).
The function we want to minimize or maximize is called the objective func-
tion or criterion. When we are minimizing it, we may also call it the cost
function, loss function, or error function. In this book, we use these terms
interchangeably, though some machine learning publications assign special meaning
to some of these terms.
We often denote the value that minimizes or maximizes a function with a superscript ∗. For example, we might say x∗ = arg min f(x).
Figure 4.1: An illustration of how the gradient descent algorithm uses the derivatives of a function to follow the function downhill to a minimum. Here f(x) = ½x², so f'(x) = x. For x < 0, we have f'(x) < 0, so we can decrease f by moving rightward; for x > 0, we have f'(x) > 0, so we can decrease f by moving leftward. The global minimum is at x = 0, where f'(x) = 0 and gradient descent halts.
We assume the reader is already familiar with calculus, but provide a brief
review of how calculus concepts relate to optimization here.
Suppose we have a function y = f(x), where both x and y are real numbers.
The derivative of this function is denoted as f'(x) or as dy/dx. The derivative f'(x) gives the slope of f(x) at the point x. In other words, it specifies how to scale a small change in the input in order to obtain the corresponding change in the output: f(x + ε) ≈ f(x) + εf'(x).
The derivative is therefore useful for minimizing a function because it tells
us how to change x in order to make a small improvement in y. For example,
we know that f(x − ε sign(f'(x))) is less than f(x) for small enough ε. We can thus reduce f(x) by moving x in small steps with the opposite sign of the derivative. This technique is called gradient descent (Cauchy, 1847). See figure 4.1 for an example of this technique.
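As a toy illustration of this rule (the starting point and step size are arbitrary choices), the sketch below runs gradient descent on f(x) = ½x², whose derivative is f'(x) = x, as in figure 4.1.

def f(x):
    return 0.5 * x ** 2

def f_prime(x):
    return x

x = 2.0          # arbitrary starting point
epsilon = 0.1    # arbitrary small step size

for step in range(100):
    x = x - epsilon * f_prime(x)   # move with the opposite sign of the derivative

print(x, f(x))   # x is driven close to the global minimum at x = 0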
When f'(x) = 0, the derivative provides no information about which direction to move. Points where f'(x) = 0 are known as critical points or stationary
points. A local minimum is a point where f(x) is lower than at all neighboring
points, so it is no longer possible to decrease f(x) by making infinitesimal steps.
A local maximum is a point where f(x) is higher than at all neighboring points, so it is not possible to increase f(x) by making infinitesimal steps.
Figure 4.2: Examples of each of the three types of critical points in 1-D. A critical point is
a point with zero slope. Such a point can either be a local minimum, which is lower than
the neighboring points, a local maximum, which is higher than the neighboring points, or
a saddle point, which has neighbors that are both higher and lower than the point itself.
Some critical points are neither maxima nor minima. These are known as saddle points. See figure 4.2 for examples of each type of critical point.
A point that obtains the absolute lowest value of f(x) is a global minimum.
It is possible for there to be only one global minimum or multiple global minima of
the function. It is also possible for there to be local minima that are not globally
optimal. In the context of deep learning, we optimize functions that may have
many local minima that are not optimal, and many saddle points surrounded by
very flat regions. All of this makes optimization very difficult, especially when the
input to the function is multidimensional. We therefore usually settle for finding a
value of f that is very low, but not necessarily minimal in any formal sense. See
figure 4.3 for an example.
We often minimize functions that have multiple inputs: f : Rn → R. For the
concept of “minimization” to make sense, there must still be only one (scalar)
output.
For functions with multiple inputs, we must make use of the concept of partial
derivatives. The partial derivative ∂/∂xi f(x) measures how f changes as only the
variable xi increases at point x. The gradient generalizes the notion of derivative
to the case where the derivative is with respect to a vector: the gradient of f is the
vector containing all of the partial derivatives, denoted ∇xf(x). Element i of the
gradient is the partial derivative of f with respect to xi. In multiple dimensions, critical points are points where every element of the gradient is equal to zero.
Figure 4.3: Optimization algorithms may fail to find a global minimum when there are multiple local minima or plateaus present. Ideally, we would like to arrive at the global minimum, but this might not be possible. A local minimum that performs nearly as well as the global one is an acceptable halting point, while a local minimum that performs poorly should be avoided. In the context of deep learning, we generally accept such solutions even though they are not truly minimal, so long as they correspond to significantly low values of the cost function.
The directional derivative in direction u (a unit vector) is the slope of the function f in direction u. In other words, the directional derivative is the derivative of the function f(x + αu) with respect to α, evaluated at α = 0. Using the chain rule, we can see that ∂/∂α f(x + αu) evaluates to u⊤∇x f(x) when α = 0.
To minimize f, we would like to find the direction in which f decreases the fastest. We can do this using the directional derivative:

min_{u, u⊤u=1} u⊤∇x f(x) (4.3)
= min_{u, u⊤u=1} ||u||₂ ||∇x f(x)||₂ cos θ (4.4)
where θ is the angle between u and the gradient. Substituting in ||u||₂ = 1 and
ignoring factors that do not depend on u, this simplifies to minu cos θ. This is
minimized when u points in the opposite direction as the gradient. In other
words, the gradient points directly uphill, and the negative gradient points directly
downhill. We can decrease f by moving in the direction of the negative gradient.
This is known as the method of steepest descent or gradient descent.
Steepest descent proposes a new point
x = x − ∇
 xf( )
x (4.5)
where ε is the learning rate, a positive scalar determining the size of the step. We can choose ε in several different ways. A popular approach is to set ε to a small constant. Sometimes, we can solve for the step size that makes the directional derivative vanish. Another approach is to evaluate f(x − ε∇x f(x)) for several values of ε and choose the one that results in the smallest objective function value.
This last strategy is called a line search.
Steepest descent converges when every element of the gradient is zero (or, in
practice, very close to zero). In some cases, we may be able to avoid running this
iterative algorithm, and just jump directly to the critical point by solving the
equation ∇x f(x) = 0 for x.
Although gradient descent is limited to optimization in continuous spaces, the
general concept of repeatedly making a small move (that is approximately the best
small move) towards better configurations can be generalized to discrete spaces.
Ascending an objective function of discrete parameters is called hill climbing (Russel and Norvig, 2003).
4.3.1 Beyond the Gradient: Jacobian and Hessian Matrices
Sometimes we need to find all of the partial derivatives of a function whose input
and output are both vectors. The matrix containing all such partial derivatives is
known as a Jacobian matrix. Specifically, if we have a function f : Rm → Rn,
then the Jacobian matrix J ∈ R^(n×m) of f is defined such that Ji,j = ∂/∂xj f(x)i.
We are also sometimes interested in a derivative of a derivative. This is known
as a second derivative. For example, for a function f : Rn → R, the derivative
with respect to xi of the derivative of f with respect to xj is denoted as ∂²f/∂xi∂xj. In a single dimension, we can denote d²f/dx² by f''(x). The second derivative tells
us how the first derivative will change as we vary the input. This is important
because it tells us whether a gradient step will cause as much of an improvement
as we would expect based on the gradient alone. We can think of the second
derivative as measuring curvature. Suppose we have a quadratic function (many
functions that arise in practice are not quadratic but can be approximated well
as quadratic, at least locally). If such a function has a second derivative of zero,
then there is no curvature. It is a perfectly flat line, and its value can be predicted
using only the gradient. If the gradient is 1, then we can make a step of size ε along the negative gradient, and the cost function will decrease by ε. If the second derivative is negative, the function curves downward, so the cost function will actually decrease by more than ε. Finally, if the second derivative is positive, the function curves upward, so the cost function can decrease by less than ε.
Figure 4.4: The second derivative determines the curvature of a function. Here we show
quadratic functions with various curvature. The dashed line indicates the value of the cost
function we would expect based on the gradient information alone as we make a gradient
step downhill. In the case of negative curvature, the cost function actually decreases faster
than the gradient predicts. In the case of no curvature, the gradient predicts the decrease
correctly. In the case of positive curvature, the function decreases slower than expected
and eventually begins to increase, so steps that are too large can actually increase the
function inadvertently.
See figure 4.4 to see how different forms of curvature affect the relationship between
the value of the cost function predicted by the gradient and the true value.
When our function has multiple input dimensions, there are many second
derivatives. These derivatives can be collected together into a matrix called the
Hessian matrix. The Hessian matrix H(f)(x) is defined such that

H(f)(x)i,j = ∂²f(x)/∂xi∂xj. (4.6)
Equivalently, the Hessian is the Jacobian of the gradient.
Anywhere that the second partial derivatives are continuous, the differential
operators are commutative, i.e. their order can be swapped:
∂²f(x)/∂xi∂xj = ∂²f(x)/∂xj∂xi. (4.7)
This implies that Hi,j = Hj,i, so the Hessian matrix is symmetric at such points.
Most of the functions we encounter in the context of deep learning have a symmetric
Hessian almost everywhere. Because the Hessian matrix is real and symmetric,
we can decompose it into a set of real eigenvalues and an orthogonal basis of
eigenvectors. The second derivative in a specific direction represented by a unit
vector d is given by d⊤Hd. When d is an eigenvector of H, the second derivative
in that direction is given by the corresponding eigenvalue. For other directions of
d, the directional second derivative is a weighted average of all of the eigenvalues,
with weights between 0 and 1, and eigenvectors that have smaller angle with d
receiving more weight. The maximum eigenvalue determines the maximum second
derivative and the minimum eigenvalue determines the minimum second derivative.
The (directional) second derivative tells us how well we can expect a gradient
descent step to perform. We can make a second-order Taylor series approximation
to the function f(x) around the current point x^(0):

f(x) ≈ f(x^(0)) + (x − x^(0))⊤g + ½(x − x^(0))⊤H(x − x^(0)), (4.8)
where g is the gradient and H is the Hessian at x^(0). If we use a learning rate of ε, then the new point x will be given by x^(0) − εg. Substituting this into our approximation, we obtain

f(x^(0) − εg) ≈ f(x^(0)) − εg⊤g + ½ε²g⊤Hg. (4.9)
There are three terms here: the original value of the function, the expected
improvement due to the slope of the function, and the correction we must apply
to account for the curvature of the function. When this last term is too large, the
gradient descent step can actually move uphill. When g⊤Hg is zero or negative, the Taylor series approximation predicts that increasing ε forever will decrease f forever. In practice, the Taylor series is unlikely to remain accurate for large ε, so one must resort to more heuristic choices of ε in this case. When g⊤Hg is positive, solving for the optimal step size that decreases the Taylor series approximation of the function the most yields
∗
=
gg
g Hg
. (4.10)
In the worst case, when g aligns with the eigenvector of H corresponding to the
maximal eigenvalue λmax, then this optimal step size is given by 1/λmax. To the
extent that the function we minimize can be approximated well by a quadratic
function, the eigenvalues of the Hessian thus determine the scale of the learning
rate.
The second derivative can be used to determine whether a critical point is
a local maximum, a local minimum, or saddle point. Recall that on a critical
point, f'(x) = 0. When the second derivative f''(x) > 0, the first derivative f'(x) increases as we move to the right and decreases as we move to the left. This means
f'(x − ε) < 0 and f'(x + ε) > 0 for small enough ε. In other words, as we move right, the slope begins to point uphill to the right, and as we move left, the slope begins to point uphill to the left. Thus, when f'(x) = 0 and f''(x) > 0, we can conclude that x is a local minimum. Similarly, when f'(x) = 0 and f''(x) < 0, we can conclude that x is a local maximum. This is known as the second derivative test. Unfortunately, when f''(x) = 0, the test is inconclusive. In this case x may
be a saddle point, or a part of a flat region.
In multiple dimensions, we need to examine all of the second derivatives of the
function. Using the eigendecomposition of the Hessian matrix, we can generalize
the second derivative test to multiple dimensions. At a critical point, where
∇xf(x) = 0, we can examine the eigenvalues of the Hessian to determine whether
the critical point is a local maximum, local minimum, or saddle point. When the
Hessian is positive definite (all its eigenvalues are positive), the point is a local
minimum. This can be seen by observing that the directional second derivative
in any direction must be positive, and making reference to the univariate second
derivative test. Likewise, when the Hessian is negative definite (all its eigenvalues
are negative), the point is a local maximum. In multiple dimensions, it is actually
possible to find positive evidence of saddle points in some cases. When at least
one eigenvalue is positive and at least one eigenvalue is negative, we know that
x is a local maximum on one cross section of f but a local minimum on another
cross section. See figure 4.5 for an example. Finally, the multidimensional second
derivative test can be inconclusive, just like the univariate version. The test is
inconclusive whenever all of the non-zero eigenvalues have the same sign, but at
least one eigenvalue is zero. This is because the univariate second derivative test is
inconclusive in the cross section corresponding to the zero eigenvalue.
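A small sketch of this multidimensional test follows, offered only as an illustration (the function name and tolerance are our own choices). It classifies a critical point from the signs of the Hessian eigenvalues, using the Hessian of f(x) = x1² − x2² at the origin as an example.

import numpy as np

def classify_critical_point(hessian, tol=1e-8):
    # Classify a critical point (a point where the gradient is zero) from the
    # eigenvalues of the (symmetric) Hessian evaluated at that point.
    eigvals = np.linalg.eigvalsh(hessian)
    if np.all(eigvals > tol):
        return "local minimum"
    if np.all(eigvals < -tol):
        return "local maximum"
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point"
    return "inconclusive"   # some eigenvalue is (near) zero

# Hessian of f(x) = x1**2 - x2**2 at the critical point x = 0.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(classify_critical_point(H))   # saddle point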
In multiple dimensions, there is a different second derivative for each direction
at a single point. The condition number of the Hessian at this point measures
how much the second derivatives differ from each other. When the Hessian has a
poor condition number, gradient descent performs poorly. This is because in one
direction, the derivative increases rapidly, while in another direction, it increases
slowly. Gradient descent is unaware of this change in the derivative so it does not
know that it needs to explore preferentially in the direction where the derivative
remains negative for longer. It also makes it difficult to choose a good step size.
The step size must be small enough to avoid overshooting the minimum and going
uphill in directions with strong positive curvature. This usually means that the
step size is too small to make significant progress in other directions with less
curvature. See figure 4.6 for an example.
This issue can be resolved by using information from the Hessian matrix to guide the search. The simplest method for doing so is known as Newton’s method.
Figure 4.5: A saddle point containing both positive and negative curvature. The function
in this example is f(x) = x1² − x2². Along the axis corresponding to x1, the function
curves upward. This axis is an eigenvector of the Hessian and has a positive eigenvalue.
Along the axis corresponding to x2, the function curves downward. This direction is an
eigenvector of the Hessian with negative eigenvalue. The name “saddle point” derives from
the saddle-like shape of this function. This is the quintessential example of a function
with a saddle point. In more than one dimension, it is not necessary to have an eigenvalue
of 0 in order to get a saddle point: it is only necessary to have both positive and negative
eigenvalues. We can think of a saddle point with both signs of eigenvalues as being a local
maximum within one cross section and a local minimum within another cross section.
Figure 4.6: Gradient descent fails to exploit the curvature information contained in the
Hessian matrix. Here we use gradient descent to minimize a quadratic function f(x) whose
Hessian matrix has condition number 5. This means that the direction of most curvature
has five times more curvature than the direction of least curvature. In this case, the most
curvature is in the direction [1, 1] and the least curvature is in the direction [1, −1]. The
red lines indicate the path followed by gradient descent. This very elongated quadratic
function resembles a long canyon. Gradient descent wastes time repeatedly descending
canyon walls, because they are the steepest feature. Because the step size is somewhat
too large, it has a tendency to overshoot the bottom of the function and thus needs to
descend the opposite canyon wall on the next iteration. The large positive eigenvalue
of the Hessian corresponding to the eigenvector pointed in this direction indicates that
this directional derivative is rapidly increasing, so an optimization algorithm based on
the Hessian could predict that the steepest direction is not actually a promising search
direction in this context.
Newton’s method is based on using a second-order Taylor series expansion to
approximate f(x) near some point x^(0):
f(x) ≈ f(x^(0)) + (x − x^(0))⊤∇x f(x^(0)) + ½(x − x^(0))⊤H(f)(x^(0))(x − x^(0)). (4.11)
If we then solve for the critical point of this function, we obtain:
x∗ = x^(0) − H(f)(x^(0))⁻¹∇x f(x^(0)). (4.12)
When f is a positive definite quadratic function, Newton’s method consists of applying equation 4.12 once to jump to the minimum of the function directly. When f is not truly quadratic but can be locally approximated as a positive definite quadratic, Newton’s method consists of applying equation 4.12 multiple
times. Iteratively updating the approximation and jumping to the minimum of
the approximation can reach the critical point much faster than gradient descent
would. This is a useful property near a local minimum, but it can be a harmful
property near a saddle point. As discussed in section 8.2.3, Newton’s method is
only appropriate when the nearby critical point is a minimum (all the eigenvalues
of the Hessian are positive), whereas gradient descent is not attracted to saddle
points unless the gradient points toward them.
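The sketch below, included only as an illustration with arbitrary example values of A and b, applies the Newton update of equation 4.12 to a positive definite quadratic; a single update lands exactly on the minimizer.

import numpy as np

# Arbitrary positive definite example quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def gradient(x):
    return A @ x - b

def hessian(x):
    return A        # constant Hessian for a quadratic

x0 = np.array([5.0, 5.0])                                # arbitrary starting point
x1 = x0 - np.linalg.solve(hessian(x0), gradient(x0))     # Newton step, equation 4.12

print(x1)
print(np.linalg.solve(A, b))   # the exact minimizer; identical to x1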
Optimization algorithms that use only the gradient, such as gradient descent,
are called first-order optimization algorithms. Optimization algorithms that
also use the Hessian matrix, such as Newton’s method, are called second-order
optimization algorithms (Nocedal and Wright, 2006).
The optimization algorithms employed in most contexts in this book are
applicable to a wide variety of functions, but come with almost no guarantees.
Deep learning algorithms tend to lack guarantees because the family of functions
used in deep learning is quite complicated. In many other fields, the dominant
approach to optimization is to design optimization algorithms for a limited family
of functions.
In the context of deep learning, we sometimes gain some guarantees by restrict-
ing ourselves to functions that are either Lipschitz continuous or have Lipschitz
continuous derivatives. A Lipschitz continuous function is a function f whose rate
of change is bounded by a Lipschitz constant L:
∀x, ∀y, |f(x) − f(y)| ≤ L||x − y||₂. (4.13)
This property is useful because it allows us to quantify our assumption that a
small change in the input made by an algorithm such as gradient descent will have
a small change in the output. Lipschitz continuity is also a fairly weak constraint,
and many optimization problems in deep learning can be made Lipschitz continuous
with relatively minor modifications.
Perhaps the most successful field of specialized optimization is convex op-
timization. Convex optimization algorithms are able to provide many more
guarantees by making stronger restrictions. Convex optimization algorithms are
applicable only to convex functions—functions for which the Hessian is positive
semidefinite everywhere. Such functions are well-behaved because they lack saddle
points and all of their local minima are necessarily global minima. However, most
problems in deep learning are difficult to express in terms of convex optimization.
Convex optimization is used only as a subroutine of some deep learning algorithms.
Ideas from the analysis of convex optimization algorithms can be useful for proving
the convergence of deep learning algorithms. However, in general, the importance
of convex optimization is greatly diminished in the context of deep learning. For
more information about convex optimization, see Boyd and Vandenberghe (2004) or Rockafellar (1997).
4.4 Constrained Optimization
Sometimes we do not wish to maximize or minimize a function f(x) over all possible values of x. Instead we may wish to find the maximal or minimal
value of f(x) for values of x in some set S. This is known as constrained
optimization. Points x that lie within the set S are called feasible points in
constrained optimization terminology.
We often wish to find a solution that is small in some sense. A common
approach in such situations is to impose a norm constraint, such as ||x|| ≤ 1.
One simple approach to constrained optimization is simply to modify gradient
descent taking the constraint into account. If we use a small constant step size ε, we can make gradient descent steps, then project the result back into S. If we use a line search, we can search only over step sizes ε that yield new x points that are feasible, or we can project each point on the line back into the constraint region. When possible, this method can be made more efficient by projecting the gradient into the tangent space of the feasible region before taking the step or beginning the line search (Rosen, 1960).
A more sophisticated approach is to design a different, unconstrained opti-
mization problem whose solution can be converted into a solution to the original,
constrained optimization problem. For example, if we want to minimize f(x) for
x ∈ R² with x constrained to have exactly unit L² norm, we can instead minimize g(θ) = f([cos θ, sin θ]⊤) with respect to θ, then return [cos θ, sin θ] as the solution
to the original problem. This approach requires creativity; the transformation
between optimization problems must be designed specifically for each case we
encounter.
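As a hypothetical illustration of this reparameterization, the sketch below minimizes an arbitrary example objective f(x) = (x1 − 2)² + x2² over the unit circle by running gradient descent on g(θ) = f([cos θ, sin θ]); the step size and iteration count are arbitrary choices.

import numpy as np

def f(x):
    # Arbitrary example objective; its unconstrained minimum (2, 0) lies
    # outside the unit circle, so the constraint matters.
    return (x[0] - 2.0) ** 2 + x[1] ** 2

def g(theta):
    return f(np.array([np.cos(theta), np.sin(theta)]))

def g_prime(theta, h=1e-6):
    # Simple centered-difference estimate of dg/dtheta.
    return (g(theta + h) - g(theta - h)) / (2 * h)

theta = 2.0                     # arbitrary starting angle
for _ in range(1000):
    theta -= 0.05 * g_prime(theta)

x = np.array([np.cos(theta), np.sin(theta)])
print(x)                        # approximately [1., 0.], the constrained minimizer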
The Karush–Kuhn–Tucker (KKT) approach1 provides a very general so-
lution to constrained optimization. With the KKT approach, we introduce a
new function called the generalized Lagrangian or generalized Lagrange
function.
To define the Lagrangian, we first need to describe S in terms of equations
and inequalities. We want a description of S in terms of m functions g^(i) and n functions h^(j) so that S = {x | ∀i, g^(i)(x) = 0 and ∀j, h^(j)(x) ≤ 0}. The equations involving g^(i) are called the equality constraints and the inequalities involving h^(j) are called inequality constraints.
We introduce new variables λi and αj for each constraint; these are called the KKT multipliers. The generalized Lagrangian is then defined as

L(x, λ, α) = f(x) + ∑i λi g^(i)(x) + ∑j αj h^(j)(x). (4.14)
We can now solve a constrained minimization problem using unconstrained optimization of the generalized Lagrangian. Observe that, so long as at least one feasible point exists and f(x) is not permitted to have value ∞, then

min_x max_λ max_{α, α≥0} L(x, λ, α) (4.15)

has the same optimal objective function value and set of optimal points x as

min_{x∈S} f(x). (4.16)
This follows because any time the constraints are satisfied,

max_λ max_{α, α≥0} L(x, λ, α) = f(x), (4.17)

while any time a constraint is violated,

max_λ max_{α, α≥0} L(x, λ, α) = ∞. (4.18)
¹The KKT approach generalizes the method of Lagrange multipliers, which allows equality constraints but not inequality constraints.
These properties guarantee that no infeasible point can be optimal, and that the
optimum within the feasible points is unchanged.
To perform constrained maximization, we can construct the generalized Lagrange function of −f(x), which leads to this optimization problem:

min_x max_λ max_{α, α≥0} −f(x) + ∑i λi g^(i)(x) + ∑j αj h^(j)(x). (4.19)
We may also convert this to a problem with maximization in the outer loop:

max_x min_λ min_{α, α≥0} f(x) + ∑i λi g^(i)(x) − ∑j αj h^(j)(x). (4.20)
The sign of the term for the equality constraints does not matter; we may define it
with addition or subtraction as we wish, because the optimization is free to choose
any sign for each λi.
The inequality constraints are particularly interesting. We say that a constraint h^(i)(x) is active if h^(i)(x∗) = 0. If a constraint is not active, then the solution to the problem found using that constraint would remain at least a local solution if that constraint were removed. It is possible that an inactive constraint excludes other solutions. For example, a convex problem with an entire region of globally optimal points (a wide, flat region of equal cost) could have a subset of this region eliminated by constraints, or a non-convex problem could have better local stationary points excluded by a constraint that is inactive at convergence. However, the point found at convergence remains a stationary point whether or not the inactive constraints are included. Because an inactive h^(i) has negative value, the solution to min_x max_λ max_{α, α≥0} L(x, λ, α) will have αi = 0. We can thus observe that at the solution, α ⊙ h(x) = 0. In other words, for all i, we know that at least one of the constraints αi ≥ 0 and h^(i)(x) ≤ 0 must be active at the solution. To gain some intuition for this idea, we can say that either the solution is on the boundary imposed by the inequality and we must use its KKT multiplier to influence the solution to x, or the inequality has no influence on the solution and we represent this by zeroing out its KKT multiplier.
A simple set of properties describes the optimal points of constrained optimization problems. These properties are called the Karush-Kuhn-Tucker (KKT) conditions (Karush, 1939; Kuhn and Tucker, 1951). They are necessary conditions, but not always sufficient conditions, for a point to be optimal. The conditions are:
• The gradient of the generalized Lagrangian is zero.
• All constraints on both x and the KKT multipliers are satisfied.
• The inequality constraints exhibit “complementary slackness”: α ⊙ h(x) = 0.
For more information about the KKT approach, see Nocedal and Wright (2006).
4.5 Example: Linear Least Squares
Suppose we want to find the value of x that minimizes

f(x) = ½||Ax − b||₂². (4.21)
There are specialized linear algebra algorithms that can solve this problem efficiently.
However, we can also explore how to solve it using gradient-based optimization as
a simple example of how these techniques work.
First, we need to obtain the gradient:
∇x f(x) = A⊤(Ax − b) = A⊤Ax − A⊤b. (4.22)
We can then follow this gradient downhill, taking small steps. See algorithm 4.1
for details.
Algorithm 4.1 An algorithm to minimize f(x) = ½||Ax − b||₂² with respect to x using gradient descent, starting from an arbitrary value of x.

Set the step size (ε) and tolerance (δ) to small, positive numbers.
while ||A⊤Ax − A⊤b||₂ > δ do
    x ← x − ε(A⊤Ax − A⊤b)
end while
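A direct NumPy transcription of algorithm 4.1 follows, as an illustrative sketch; the example A, b, step size and tolerance are arbitrary choices.

import numpy as np

def least_squares_gd(A, b, epsilon=0.01, delta=1e-6):
    # Minimize 0.5 * ||Ax - b||^2 by gradient descent, following algorithm 4.1.
    x = np.zeros(A.shape[1])                  # starting point (an arbitrary choice)
    gradient = A.T @ A @ x - A.T @ b
    while np.linalg.norm(gradient) > delta:
        x = x - epsilon * gradient
        gradient = A.T @ A @ x - A.T @ b
    return x

# Arbitrary example problem.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = least_squares_gd(A, b)
print(x)
print(np.linalg.solve(A, b))   # A is invertible here, so the exact solution matches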
One can also solve this problem using Newton’s method. In this case, because
the true function is quadratic, the quadratic approximation employed by Newton’s
method is exact, and the algorithm converges to the global minimum in a single
step.
Now suppose we wish to minimize the same function, but subject to the constraint x⊤x ≤ 1. To do so, we introduce the Lagrangian

L(x, λ) = f(x) + λ(x⊤x − 1). (4.23)
We can now solve the problem

min_x max_{λ, λ≥0} L(x, λ). (4.24)
The smallest-norm solution to the unconstrained least squares problem may be
found using the Moore-Penrose pseudoinverse: x = A+
b. If this point is feasible,
then it is the solution to the constrained problem. Otherwise, we must find a
solution where the constraint is active. By differentiating the Lagrangian with
respect to x, we obtain the equation

A⊤Ax − A⊤b + 2λx = 0. (4.25)
This tells us that the solution will take the form

x = (A⊤A + 2λI)⁻¹A⊤b. (4.26)
The magnitude of λ must be chosen such that the result obeys the constraint. We can find this value by performing gradient ascent on λ. To do so, observe

∂L(x, λ)/∂λ = x⊤x − 1. (4.27)
When the norm of x exceeds 1, this derivative is positive, so to follow the derivative
uphill and increase the Lagrangian with respect to λ, we increase λ. Because the
coefficient on the xx penalty has increased, solving the linear equation for x will
now yield a solution with smaller norm. The process of solving the linear equation
and adjusting λ continues until x has the correct norm and the derivative on λ is
0.
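The following sketch carries out this procedure, alternating between solving equation 4.26 for x and taking a gradient ascent step on λ using equation 4.27. It is offered only as an illustration: the example A and b, the ascent step size, and the choice to clamp λ at zero are our own.

import numpy as np

# Arbitrary example problem whose unconstrained solution violates x^T x <= 1.
A = np.eye(2)
b = np.array([2.0, 2.0])

lam = 0.0       # KKT multiplier, kept non-negative
step = 0.1      # arbitrary gradient ascent step size on lambda

for _ in range(1000):
    # Equation 4.26: x = (A^T A + 2*lambda*I)^(-1) A^T b
    x = np.linalg.solve(A.T @ A + 2 * lam * np.eye(2), A.T @ b)
    # Equation 4.27: the derivative on lambda is x^T x - 1; ascend, clamping at 0.
    lam = max(0.0, lam + step * (x @ x - 1))

print(x, x @ x)   # x ends up with (approximately) unit norm, satisfying the constraint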
This concludes the mathematical preliminaries that we use to develop machine
learning algorithms. We are now ready to build and analyze some full-fledged
learning systems.
Chapter 5
Machine Learning Basics
Deep learning is a specific kind of machine learning. In order to understand
deep learning well, one must have a solid understanding of the basic principles of
machine learning. This chapter provides a brief course in the most important general
principles that will be applied throughout the rest of the book. Novice readers or
those who want a wider perspective are encouraged to consider machine learning
textbooks with a more comprehensive coverage of the fundamentals, such as Murphy (2012) or Bishop (2006). If you are already familiar with machine learning basics, feel free to skip ahead to section 5.11. That section covers some perspectives
on traditional machine learning techniques that have strongly influenced the
development of deep learning algorithms.
We begin with a definition of what a learning algorithm is, and present an
example: the linear regression algorithm. We then proceed to describe how the
challenge of fitting the training data differs from the challenge of finding patterns
that generalize to new data. Most machine learning algorithms have settings
called hyperparameters that must be determined external to the learning algorithm
itself; we discuss how to set these using additional data. Machine learning is
essentially a form of applied statistics with increased emphasis on the use of
computers to statistically estimate complicated functions and a decreased emphasis
on proving confidence intervals around these functions; we therefore present the
two central approaches to statistics: frequentist estimators and Bayesian inference.
Most machine learning algorithms can be divided into the categories of supervised
learning and unsupervised learning; we describe these categories and give some
examples of simple learning algorithms from each category. Most deep learning
algorithms are based on an optimization algorithm called stochastic gradient
descent. We describe how to combine various algorithm components such as
an optimization algorithm, a cost function, a model, and a dataset to build a
machine learning algorithm. Finally, in section 5.11, we describe some of the
factors that have limited the ability of traditional machine learning to generalize.
These challenges have motivated the development of deep learning algorithms that
overcome these obstacles.
5.1 Learning Algorithms
A machine learning algorithm is an algorithm that is able to learn from data. But
what do we mean by learning? Mitchell 1997
( ) provides the definition “A computer
program is said to learn from experience E with respect to some class of tasks T
and performance measure P, if its performance at tasks in T, as measured by P,
improves with experience E.” One can imagine a very wide variety of experiences
E, tasks T, and performance measures P , and we do not make any attempt in this
book to provide a formal definition of what may be used for each of these entities.
Instead, the following sections provide intuitive descriptions and examples of the
different kinds of tasks, performance measures and experiences that can be used
to construct machine learning algorithms.
5.1.1 The Task, T
Machine learning allows us to tackle tasks that are too difficult to solve with
fixed programs written and designed by human beings. From a scientific and
philosophical point of view, machine learning is interesting because developing our
understanding of machine learning entails developing our understanding of the
principles that underlie intelligence.
In this relatively formal definition of the word “task,” the process of learning
itself is not the task. Learning is our means of attaining the ability to perform the
task. For example, if we want a robot to be able to walk, then walking is the task.
We could program the robot to learn to walk, or we could attempt to directly write
a program that specifies how to walk manually.
Machine learning tasks are usually described in terms of how the machine
learning system should process an example. An example is a collection of features
that have been quantitatively measured from some object or event that we want
the machine learning system to process. We typically represent an example as a
vector x ∈ Rn where each entry xi of the vector is another feature. For example,
the features of an image are usually the values of the pixels in the image.
Many kinds of tasks can be solved with machine learning. Some of the most
common machine learning tasks include the following:
• Classification: In this type of task, the computer program is asked to specify
which of k categories some input belongs to. To solve this task, the learning
algorithm is usually asked to produce a function f : Rn → {1, . . . , k}. When
y = f(x), the model assigns an input described by vector x to a category
identified by numeric code y. There are other variants of the classification
task, for example, where f outputs a probability distribution over classes.
An example of a classification task is object recognition, where the input
is an image (usually described as a set of pixel brightness values), and the
output is a numeric code identifying the object in the image. For example,
the Willow Garage PR2 robot is able to act as a waiter that can recognize
different kinds of drinks and deliver them to people on command (Goodfellow et al., 2010). Modern object recognition is best accomplished with deep learning (Krizhevsky et al., 2012; Ioffe and Szegedy, 2015). Object recognition is the same basic technology that allows computers to recognize faces (Taigman et al., 2014), which can be used to automatically tag people
in photo collections and allow computers to interact more naturally with
their users.
• Classification with missing inputs: Classification becomes more chal-
lenging if the computer program is not guaranteed that every measurement
in its input vector will always be provided. In order to solve the classification
task, the learning algorithm only has to define a single function mapping from a vector input to a categorical output. When some of the inputs may
be missing, rather than providing a single classification function, the learning
algorithm must learn a set of functions. Each function corresponds to classifying x with a different subset of its inputs missing. This kind of situation
arises frequently in medical diagnosis, because many kinds of medical tests
are expensive or invasive. One way to efficiently define such a large set
of functions is to learn a probability distribution over all of the relevant
variables, then solve the classification task by marginalizing out the missing
variables. With n input variables, we can now obtain all 2n different classifi-
cation functions needed for each possible set of missing inputs, but we only
need to learn a single function describing the joint probability distribution.
See Goodfellow et al. (2013b) for an example of a deep probabilistic model
applied to such a task in this way. Many of the other tasks described in this
section can also be generalized to work with missing inputs; classification
with missing inputs is just one example of what machine learning can do.
• Regression: In this type of task, the computer program is asked to predict a
numerical value given some input. To solve this task, the learning algorithm
is asked to output a function f : Rn
→ R. This type of task is similar to
classification, except that the format of output is different. An example of
a regression task is the prediction of the expected claim amount that an
insured person will make (used to set insurance premiums), or the prediction
of future prices of securities. These kinds of predictions are also used for
algorithmic trading.
• Transcription: In this type of task, the machine learning system is asked
to observe a relatively unstructured representation of some kind of data and
transcribe it into discrete, textual form. For example, in optical character
recognition, the computer program is shown a photograph containing an
image of text and is asked to return this text in the form of a sequence
of characters (e.g., in ASCII or Unicode format). Google Street View uses
deep learning to process address numbers in this way (Goodfellow et al., 2014d). Another example is speech recognition, where the computer program
is provided an audio waveform and emits a sequence of characters or word
ID codes describing the words that were spoken in the audio recording. Deep
learning is a crucial component of modern speech recognition systems used
at major companies including Microsoft, IBM and Google (Hinton et al., 2012b).
• Machine translation: In a machine translation task, the input already
consists of a sequence of symbols in some language, and the computer program
must convert this into a sequence of symbols in another language. This is
commonly applied to natural languages, such as translating from English to
French. Deep learning has recently begun to have an important impact on
this kind of task (Sutskever et al., 2014; Bahdanau et al., 2015).
• Structured output: Structured output tasks involve any task where the
output is a vector (or other data structure containing multiple values) with
important relationships between the different elements. This is a broad
category, and subsumes the transcription and translation tasks described
above, but also many other tasks. One example is parsing—mapping a
natural language sentence into a tree that describes its grammatical structure
and tagging nodes of the trees as being verbs, nouns, or adverbs, and so on.
See Collobert (2011) for an example of deep learning applied to a parsing task. Another example is pixel-wise segmentation of images, where the
computer program assigns every pixel in an image to a specific category. For
example, deep learning can be used to annotate the locations of roads in aerial photographs (Mnih and Hinton, 2010). The output need not have its form mirror the structure of the input as closely as in these annotation-style tasks. For example, in image captioning, the computer program observes an image and outputs a natural language sentence describing the image (Kiros et al., 2014a,b; Mao et al., 2015; Vinyals et al., 2015b; Donahue et al., 2014; Karpathy and Li, 2015; Fang et al., 2015; Xu et al., 2015). These tasks are
called structured output tasks because the program must output several
values that are all tightly inter-related. For example, the words produced by
an image captioning program must form a valid sentence.
• Anomaly detection: In this type of task, the computer program sifts
through a set of events or objects, and flags some of them as being unusual
or atypical. An example of an anomaly detection task is credit card fraud
detection. By modeling your purchasing habits, a credit card company can
detect misuse of your cards. If a thief steals your credit card or credit card
information, the thief’s purchases will often come from a different probability
distribution over purchase types than your own. The credit card company
can prevent fraud by placing a hold on an account as soon as that card has
been used for an uncharacteristic purchase. See Chandola et al. (2009) for a survey of anomaly detection methods.
• Synthesis and sampling: In this type of task, the machine learning al-
gorithm is asked to generate new examples that are similar to those in the
training data. Synthesis and sampling via machine learning can be useful
for media applications where it can be expensive or boring for an artist to
generate large volumes of content by hand. For example, video games can
automatically generate textures for large objects or landscapes, rather than
requiring an artist to manually label each pixel (Luo et al., 2013). In some
cases, we want the sampling or synthesis procedure to generate some specific
kind of output given the input. For example, in a speech synthesis task, we
provide a written sentence and ask the program to emit an audio waveform
containing a spoken version of that sentence. This is a kind of structured
output task, but with the added qualification that there is no single correct
output for each input, and we explicitly desire a large amount of variation in
the output, in order for the output to seem more natural and realistic.
• Imputation of missing values: In this type of task, the machine learning
algorithm is given a new example x ∈ Rn, but with some entries xi of x
missing. The algorithm must provide a prediction of the values of the missing
entries.
• Denoising: In this type of task, the machine learning algorithm is given as input a corrupted example x̃ ∈ Rn
obtained by an unknown corruption process
from a clean example x ∈ Rn
. The learner must predict the clean example
x from its corrupted version x̃, or more generally predict the conditional
probability distribution p(x | x̃).
• Density estimation or probability mass function estimation: In
the density estimation problem, the machine learning algorithm is asked
to learn a function pmodel : Rn → R, where pmodel(x) can be interpreted
as a probability density function (if x is continuous) or a probability mass
function (if x is discrete) on the space that the examples were drawn from.
To do such a task well (we will specify exactly what that means when we
discuss performance measures P), the algorithm needs to learn the structure
of the data it has seen. It must know where examples cluster tightly and
where they are unlikely to occur. Most of the tasks described above require
the learning algorithm to at least implicitly capture the structure of the
probability distribution. Density estimation allows us to explicitly capture
that distribution. In principle, we can then perform computations on that
distribution in order to solve the other tasks as well. For example, if we
have performed density estimation to obtain a probability distribution p(x),
we can use that distribution to solve the missing value imputation task. If
a value xi is missing and all of the other values, denoted x−i, are given,
then we know the distribution over it is given by p(xi | x−i). In practice,
density estimation does not always allow us to solve all of these related tasks,
because in many cases the required operations on p(x) are computationally
intractable.
Of course, many other tasks and types of tasks are possible. The types of tasks
we list here are intended only to provide examples of what machine learning can
do, not to define a rigid taxonomy of tasks.
5.1.2 The Performance Measure, P
In order to evaluate the abilities of a machine learning algorithm, we must design
a quantitative measure of its performance. Usually this performance measure P is
specific to the task T being carried out by the system.
For tasks such as classification, classification with missing inputs, and tran-
scription, we often measure the accuracy of the model. Accuracy is just the
proportion of examples for which the model produces the correct output. We can
also obtain equivalent information by measuring the error rate, the proportion
of examples for which the model produces an incorrect output. We often refer to
the error rate as the expected 0-1 loss. The 0-1 loss on a particular example is 0
if it is correctly classified and 1 if it is not. For tasks such as density estimation,
it does not make sense to measure accuracy, error rate, or any other kind of 0-1
loss. Instead, we must use a different performance metric that gives the model
a continuous-valued score for each example. The most common approach is to
report the average log-probability the model assigns to some examples.
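As a purely illustrative sketch (not from the text), the snippet below computes these two kinds of performance measures with NumPy; the label and probability arrays are made-up examples.

```python
import numpy as np

# Hypothetical predictions and labels for a five-example classification task.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

accuracy = np.mean(y_pred == y_true)   # proportion of correct outputs
error_rate = 1.0 - accuracy            # expected 0-1 loss on these examples

# For density estimation, report the average log-probability instead.
p_model = np.array([0.9, 0.8, 0.3, 0.7, 0.95])  # model probability of each observed example
avg_log_prob = np.mean(np.log(p_model))
```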
Usually we are interested in how well the machine learning algorithm performs
on data that it has not seen before, since this determines how well it will work when
deployed in the real world. We therefore evaluate these performance measures using
a test set of data that is separate from the data used for training the machine
learning system.
The choice of performance measure may seem straightforward and objective,
but it is often difficult to choose a performance measure that corresponds well to
the desired behavior of the system.
In some cases, this is because it is difficult to decide what should be measured.
For example, when performing a transcription task, should we measure the accuracy
of the system at transcribing entire sequences, or should we use a more fine-grained
performance measure that gives partial credit for getting some elements of the
sequence correct? When performing a regression task, should we penalize the
system more if it frequently makes medium-sized mistakes or if it rarely makes
very large mistakes? These kinds of design choices depend on the application.
In other cases, we know what quantity we would ideally like to measure, but
measuring it is impractical. For example, this arises frequently in the context of
density estimation. Many of the best probabilistic models represent probability
distributions only implicitly. Computing the actual probability value assigned to
a specific point in space in many such models is intractable. In these cases, one
must design an alternative criterion that still corresponds to the design objectives,
or design a good approximation to the desired criterion.
5.1.3 The Experience, E
Machine learning algorithms can be broadly categorized as unsupervised or
supervised by what kind of experience they are allowed to have during the
learning process.
Most of the learning algorithms in this book can be understood as being allowed
to experience an entire dataset. A dataset is a collection of many examples, as
defined in section 5.1.1. Sometimes we will also call examples data points.
One of the oldest datasets studied by statisticians and machine learning re-
searchers is the Iris dataset (Fisher, 1936). It is a collection of measurements of
different parts of 150 iris plants. Each individual plant corresponds to one example.
The features within each example are the measurements of each of the parts of the
plant: the sepal length, sepal width, petal length and petal width. The dataset
also records which species each plant belonged to. Three different species are
represented in the dataset.
Unsupervised learning algorithms experience a dataset containing many
features, then learn useful properties of the structure of this dataset. In the context
of deep learning, we usually want to learn the entire probability distribution that
generated a dataset, whether explicitly as in density estimation or implicitly for
tasks like synthesis or denoising. Some other unsupervised learning algorithms
perform other roles, like clustering, which consists of dividing the dataset into
clusters of similar examples.
Supervised learning algorithms experience a dataset containing features,
but each example is also associated with a label or target. For example, the Iris
dataset is annotated with the species of each iris plant. A supervised learning
algorithm can study the Iris dataset and learn to classify iris plants into three
different species based on their measurements.
Roughly speaking, unsupervised learning involves observing several examples
of a random vector x, and attempting to implicitly or explicitly learn the proba-
bility distribution p(x), or some interesting properties of that distribution, while
supervised learning involves observing several examples of a random vector x and
an associated value or vector y, and learning to predict y from x, usually by
estimating p(y | x). The term supervised learning originates from the view of
the target y being provided by an instructor or teacher who shows the machine
learning system what to do. In unsupervised learning, there is no instructor or
teacher, and the algorithm must learn to make sense of the data without this guide.
Unsupervised learning and supervised learning are not formally defined terms.
The lines between them are often blurred. Many machine learning technologies can
be used to perform both tasks. For example, the chain rule of probability states
that for a vector x ∈ R^n, the joint distribution can be decomposed as

p(\mathbf{x}) = \prod_{i=1}^{n} p(x_i \mid x_1, \ldots, x_{i-1}).    (5.1)
This decomposition means that we can solve the ostensibly unsupervised problem of
modeling p(x) by splitting it into n supervised learning problems. Alternatively, we
can solve the supervised learning problem of learning p(y | x) by using traditional
unsupervised learning technologies to learn the joint distribution p(x, y) and
inferring

p(y \mid \mathbf{x}) = \frac{p(\mathbf{x}, y)}{\sum_{y'} p(\mathbf{x}, y')}.    (5.2)
Though unsupervised learning and supervised learning are not completely formal or
distinct concepts, they do help to roughly categorize some of the things we do with
machine learning algorithms. Traditionally, people refer to regression, classification
and structured output problems as supervised learning. Density estimation in
support of other tasks is usually considered unsupervised learning.
Other variants of the learning paradigm are possible. For example, in semi-
supervised learning, some examples include a supervision target but others do
not. In multi-instance learning, an entire collection of examples is labeled as
containing or not containing an example of a class, but the individual members
of the collection are not labeled. For a recent example of multi-instance learning
with deep models, see Kotzias et al. (2015).
Some machine learning algorithms do not just experience a fixed dataset. For
example, reinforcement learning algorithms interact with an environment, so
there is a feedback loop between the learning system and its experiences. Such
algorithms are beyond the scope of this book. Please see Sutton and Barto (1998)
or Bertsekas and Tsitsiklis (1996) for information about reinforcement learning,
and Mnih et al. (2013) for the deep learning approach to reinforcement learning.
Most machine learning algorithms simply experience a dataset. A dataset can
be described in many ways. In all cases, a dataset is a collection of examples,
which are in turn collections of features.
One common way of describing a dataset is with a design matrix. A design
matrix is a matrix containing a different example in each row. Each column of the
matrix corresponds to a different feature. For instance, the Iris dataset contains
150 examples with four features for each example. This means we can represent
the dataset with a design matrix X ∈ R^{150×4}, where X_{i,1} is the sepal length of
plant i, Xi,2 is the sepal width of plant i, etc. We will describe most of the learning
algorithms in this book in terms of how they operate on design matrix datasets.
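To make the design-matrix view concrete, the short sketch below loads the Iris data as a 150 × 4 matrix X and a label vector y. It assumes scikit-learn is installed; that library is not required by anything in the text.

```python
from sklearn.datasets import load_iris

# X has shape (150, 4): one row per plant, one column per feature
# (sepal length, sepal width, petal length, petal width).
# y has shape (150,) and records the species of each plant.
X, y = load_iris(return_X_y=True)
print(X.shape, y.shape)  # (150, 4) (150,)
```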
Of course, to describe a dataset as a design matrix, it must be possible to
describe each example as a vector, and each of these vectors must be the same size.
This is not always possible. For example, if you have a collection of photographs
with different widths and heights, then different photographs will contain different
numbers of pixels, so not all of the photographs may be described with the same
length of vector. Section 9.7 and chapter 10 describe how to handle different
types of such heterogeneous data. In cases like these, rather than describing the
dataset as a matrix with m rows, we will describe it as a set containing m elements:
{x^{(1)}, x^{(2)}, . . . , x^{(m)}}. This notation does not imply that any two example vectors x^{(i)} and x^{(j)} have the same size.
In the case of supervised learning, the example contains a label or target as
well as a collection of features. For example, if we want to use a learning algorithm
to perform object recognition from photographs, we need to specify which object
appears in each of the photos. We might do this with a numeric code, with 0
signifying a person, 1 signifying a car, 2 signifying a cat, etc. Often when working
with a dataset containing a design matrix of feature observations X, we also
provide a vector of labels y, with y_i providing the label for example i.
Of course, sometimes the label may be more than just a single number. For
example, if we want to train a speech recognition system to transcribe entire
sentences, then the label for each example sentence is a sequence of words.
Just as there is no formal definition of supervised and unsupervised learning,
there is no rigid taxonomy of datasets or experiences. The structures described here
cover most cases, but it is always possible to design new ones for new applications.
5.1.4 Example: Linear Regression
Our definition of a machine learning algorithm as an algorithm that is capable
of improving a computer program’s performance at some task via experience is
somewhat abstract. To make this more concrete, we present an example of a
simple machine learning algorithm: linear regression. We will return to this
example repeatedly as we introduce more machine learning concepts that help to
understand its behavior.
As the name implies, linear regression solves a regression problem. In other
words, the goal is to build a system that can take a vector x ∈ Rn as input and
predict the value of a scalar y ∈ R as its output. In the case of linear regression,
the output is a linear function of the input. Let ŷ be the value that our model
predicts y should take on. We define the output to be

\hat{y} = \mathbf{w}^\top \mathbf{x},    (5.3)

where w ∈ R^n is a vector of parameters.
Parameters are values that control the behavior of the system. In this case, wi is
the coefficient that we multiply by feature xi before summing up the contributions
from all the features. We can think of w as a set of weights that determine how
each feature affects the prediction. If a feature xi receives a positive weight wi,
then increasing the value of that feature increases the value of our prediction ŷ.
If a feature receives a negative weight, then increasing the value of that feature
decreases the value of our prediction. If a feature’s weight is large in magnitude,
then it has a large effect on the prediction. If a feature’s weight is zero, it has no
effect on the prediction.
We thus have a definition of our task T: to predict y from x by outputting
ŷ = w^⊤x. Next we need a definition of our performance measure, P.
Suppose that we have a design matrix of m example inputs that we will not
use for training, only for evaluating how well the model performs. We also have
a vector of regression targets providing the correct value of y for each of these
examples. Because this dataset will only be used for evaluation, we call it the test
set. We refer to the design matrix of inputs as X^{(test)} and the vector of regression
targets as y^{(test)}.
One way of measuring the performance of the model is to compute the mean
squared error of the model on the test set. If ŷ^{(test)} gives the predictions of the
model on the test set, then the mean squared error is given by

\mathrm{MSE}_{\mathrm{test}} = \frac{1}{m} \sum_{i} \left( \hat{\mathbf{y}}^{(\mathrm{test})} - \mathbf{y}^{(\mathrm{test})} \right)_i^2.    (5.4)

Intuitively, one can see that this error measure decreases to 0 when ŷ^{(test)} = y^{(test)}. We can also see that

\mathrm{MSE}_{\mathrm{test}} = \frac{1}{m} \left\| \hat{\mathbf{y}}^{(\mathrm{test})} - \mathbf{y}^{(\mathrm{test})} \right\|_2^2,    (5.5)
so the error increases whenever the Euclidean distance between the predictions
and the targets increases.
To make a machine learning algorithm, we need to design an algorithm that
will improve the weights w in a way that reduces MSE_test when the algorithm
is allowed to gain experience by observing a training set (X^{(train)}, y^{(train)}). One
intuitive way of doing this (which we will justify later, in section 5.5.1) is just to
minimize the mean squared error on the training set, MSE_train.
To minimize MSE_train, we can simply solve for where its gradient is 0:

\nabla_{\mathbf{w}} \mathrm{MSE}_{\mathrm{train}} = 0    (5.6)

\Rightarrow \nabla_{\mathbf{w}} \frac{1}{m} \left\| \hat{\mathbf{y}}^{(\mathrm{train})} - \mathbf{y}^{(\mathrm{train})} \right\|_2^2 = 0    (5.7)

\Rightarrow \frac{1}{m} \nabla_{\mathbf{w}} \left\| \mathbf{X}^{(\mathrm{train})} \mathbf{w} - \mathbf{y}^{(\mathrm{train})} \right\|_2^2 = 0    (5.8)
[Figure 5.1 graphics: (Left) a plot titled "Linear regression example" of y against x1; (Right) a plot titled "Optimization of w" of MSE (train) against w1.]
Figure 5.1: A linear regression problem, with a training set consisting of ten data points,
each containing one feature. Because there is only one feature, the weight vector w
contains only a single parameter to learn, w1. (Left) Observe that linear regression learns
to set w1 such that the line y = w1 x comes as close as possible to passing through all the
training points. (Right) The plotted point indicates the value of w1 found by the normal
equations, which we can see minimizes the mean squared error on the training set.
\Rightarrow \nabla_{\mathbf{w}} \left( \mathbf{X}^{(\mathrm{train})} \mathbf{w} - \mathbf{y}^{(\mathrm{train})} \right)^\top \left( \mathbf{X}^{(\mathrm{train})} \mathbf{w} - \mathbf{y}^{(\mathrm{train})} \right) = 0    (5.9)

\Rightarrow \nabla_{\mathbf{w}} \left( \mathbf{w}^\top \mathbf{X}^{(\mathrm{train})\top} \mathbf{X}^{(\mathrm{train})} \mathbf{w} - 2 \mathbf{w}^\top \mathbf{X}^{(\mathrm{train})\top} \mathbf{y}^{(\mathrm{train})} + \mathbf{y}^{(\mathrm{train})\top} \mathbf{y}^{(\mathrm{train})} \right) = 0    (5.10)

\Rightarrow 2 \mathbf{X}^{(\mathrm{train})\top} \mathbf{X}^{(\mathrm{train})} \mathbf{w} - 2 \mathbf{X}^{(\mathrm{train})\top} \mathbf{y}^{(\mathrm{train})} = 0    (5.11)

\Rightarrow \mathbf{w} = \left( \mathbf{X}^{(\mathrm{train})\top} \mathbf{X}^{(\mathrm{train})} \right)^{-1} \mathbf{X}^{(\mathrm{train})\top} \mathbf{y}^{(\mathrm{train})}    (5.12)
The system of equations whose solution is given by equation 5.12 is known as
the normal equations. Evaluating equation 5.12 constitutes a simple learning
algorithm. For an example of the linear regression learning algorithm in action,
see figure 5.1.
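A minimal NumPy sketch of this learning algorithm follows; it is not part of the original text, and the synthetic dataset and variable names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: ten examples with one feature, as in figure 5.1.
X_train = rng.uniform(-1.0, 1.0, size=(10, 1))
y_train = 1.5 * X_train[:, 0] + 0.1 * rng.normal(size=10)

# Normal equations (equation 5.12): w = (X^T X)^{-1} X^T y.
# Solving the linear system is numerically preferable to forming the inverse.
w = np.linalg.solve(X_train.T @ X_train, X_train.T @ y_train)

# Performance measure P: mean squared error on a held-out test set (equation 5.4).
X_test = rng.uniform(-1.0, 1.0, size=(5, 1))
y_test = 1.5 * X_test[:, 0] + 0.1 * rng.normal(size=5)
mse_test = np.mean((X_test @ w - y_test) ** 2)
```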
It is worth noting that the term linear regression is often used to refer to
a slightly more sophisticated model with one additional parameter—an intercept
term b. In this model

\hat{y} = \mathbf{w}^\top \mathbf{x} + b,    (5.13)
so the mapping from parameters to predictions is still a linear function but the
mapping from features to predictions is now an affine function. This extension to
affine functions means that the plot of the model’s predictions still looks like a
line, but it need not pass through the origin. Instead of adding the bias parameter
b, one can continue to use the model with only weights but augment x with an
extra entry that is always set to 1. The weight corresponding to the extra entry
plays the role of the bias parameter. We will frequently use the term “linear” when
referring to affine functions throughout this book.
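A small sketch of this augmentation trick, on made-up data, is shown below; using a least-squares solver avoids forming an explicit inverse.

```python
import numpy as np

X = np.array([[0.5], [1.0], [1.5]])  # hypothetical design matrix with one feature
y = np.array([1.2, 2.1, 2.9])

# Augment x with an extra entry fixed to 1; its weight plays the role of the bias b.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
w_aug = np.linalg.lstsq(X_aug, y, rcond=None)[0]
w, b = w_aug[:-1], w_aug[-1]
```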
The intercept term b is often called the bias parameter of the affine transfor-
mation. This terminology derives from the point of view that the output of the
transformation is biased toward being b in the absence of any input. This term
is different from the idea of a statistical bias, in which a statistical estimation
algorithm’s expected estimate of a quantity is not equal to the true quantity.
Linear regression is of course an extremely simple and limited learning algorithm,
but it provides an example of how a learning algorithm can work. In the subsequent
sections we will describe some of the basic principles underlying learning algorithm
design and demonstrate how these principles can be used to build more complicated
learning algorithms.
5.2 Capacity, Overfitting and Underfitting
The central challenge in machine learning is that we must perform well on new,
previously unseen inputs—not just those on which our model was trained. The
ability to perform well on previously unobserved inputs is called generalization.
Typically, when training a machine learning model, we have access to a training
set, we can compute some error measure on the training set called the training
error, and we reduce this training error. So far, what we have described is simply
an optimization problem. What separates machine learning from optimization is
that we want the generalization error, also called the test error, to be low as
well. The generalization error is defined as the expected value of the error on a
new input. Here the expectation is taken across different possible inputs, drawn
from the distribution of inputs we expect the system to encounter in practice.
We typically estimate the generalization error of a machine learning model by
measuring its performance on a test set of examples that were collected separately
from the training set.
In our linear regression example, we trained the model by minimizing the
training error,
\frac{1}{m^{(\mathrm{train})}} \left\| \mathbf{X}^{(\mathrm{train})} \mathbf{w} - \mathbf{y}^{(\mathrm{train})} \right\|_2^2,    (5.14)

but we actually care about the test error, \frac{1}{m^{(\mathrm{test})}} \left\| \mathbf{X}^{(\mathrm{test})} \mathbf{w} - \mathbf{y}^{(\mathrm{test})} \right\|_2^2.
How can we affect performance on the test set when we get to observe only the
training set? The field of statistical learning theory provides some answers. If
the training and the test set are collected arbitrarily, there is indeed little we can
do. If we are allowed to make some assumptions about how the training and test
set are collected, then we can make some progress.
The train and test data are generated by a probability distribution over datasets
called the data generating process. We typically make a set of assumptions
known collectively as the i.i.d. assumptions. These assumptions are that the
examples in each dataset are independent from each other, and that the train
set and test set are identically distributed, drawn from the same probability
distribution as each other. This assumption allows us to describe the data gen-
erating process with a probability distribution over a single example. The same
distribution is then used to generate every train example and every test example.
We call that shared underlying distribution the data generating distribution,
denoted pdata. This probabilistic framework and the i.i.d. assumptions allow us to
mathematically study the relationship between training error and test error.
One immediate connection we can observe between the training and test error
is that the expected training error of a randomly selected model is equal to the
expected test error of that model. Suppose we have a probability distribution
p(x, y) and we sample from it repeatedly to generate the train set and the test
set. For some fixed value w, the expected training set error is exactly the same as
the expected test set error, because both expectations are formed using the same
dataset sampling process. The only difference between the two conditions is the
name we assign to the dataset we sample.
Of course, when we use a machine learning algorithm, we do not fix the
parameters ahead of time, then sample both datasets. We sample the training set,
then use it to choose the parameters to reduce training set error, then sample the
test set. Under this process, the expected test error is greater than or equal to
the expected value of training error. The factors determining how well a machine
learning algorithm will perform are its ability to:
1. Make the training error small.
2. Make the gap between training and test error small.
These two factors correspond to the two central challenges in machine learning:
underfitting and overfitting. Underfitting occurs when the model is not able to
obtain a sufficiently low error value on the training set. Overfitting occurs when
the gap between the training error and test error is too large.
We can control whether a model is more likely to overfit or underfit by altering
its capacity. Informally, a model’s capacity is its ability to fit a wide variety of
functions. Models with low capacity may struggle to fit the training set. Models
with high capacity can overfit by memorizing properties of the training set that do
not serve them well on the test set.
One way to control the capacity of a learning algorithm is by choosing its
hypothesis space, the set of functions that the learning algorithm is allowed to
select as being the solution. For example, the linear regression algorithm has the
set of all linear functions of its input as its hypothesis space. We can generalize
linear regression to include polynomials, rather than just linear functions, in its
hypothesis space. Doing so increases the model’s capacity.
A polynomial of degree one gives us the linear regression model with which we
are already familiar, with prediction
\hat{y} = b + w x.    (5.15)
By introducing x2 as another feature provided to the linear regression model, we
can learn a model that is quadratic as a function of x:

\hat{y} = b + w_1 x + w_2 x^2.    (5.16)
Though this model implements a quadratic function of its input, the output is
still a linear function of the parameters, so we can still use the normal equations
to train the model in closed form. We can continue to add more powers of x as
additional features, for example to obtain a polynomial of degree 9:

\hat{y} = b + \sum_{i=1}^{9} w_i x^i.    (5.17)
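Because the model remains linear in its parameters, the same closed-form machinery applies once the powers of x are added as features. The sketch below is an illustrative assumption (made-up data, degree 9, fewer examples than parameters), not code from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=8)                       # hypothetical inputs
y = 2.0 - x + 3.0 * x**2 + 0.1 * rng.normal(size=8)      # noisy quadratic targets

degree = 9
# Columns are x^0 (acting as the bias term), x^1, ..., x^degree.
X_poly = np.vander(x, N=degree + 1, increasing=True)

# With more parameters (10) than examples (8) the normal equations are
# underdetermined, so we use the Moore-Penrose pseudoinverse, as in figure 5.2.
w = np.linalg.pinv(X_poly) @ y
y_hat = X_poly @ w
```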
Machine learning algorithms will generally perform best when their capacity
is appropriate for the true complexity of the task they need to perform and the
amount of training data they are provided with. Models with insufficient capacity
are unable to solve complex tasks. Models with high capacity can solve complex
tasks, but when their capacity is higher than needed to solve the present task they
may overfit.
Figure 5.2 shows this principle in action. We compare a linear, quadratic
and degree-9 predictor attempting to fit a problem where the true underlying
function is quadratic. The linear function is unable to capture the curvature in
the true underlying problem, so it underfits. The degree-9 predictor is capable of
representing the correct function, but it is also capable of representing infinitely
many other functions that pass exactly through the training points, because we
have more parameters than training examples. We have little chance of choosing
a solution that generalizes well when so many wildly different solutions exist. In
this example, the quadratic model is perfectly matched to the true structure of
the task so it generalizes well to new data.
Figure 5.2: We fit three models to this example training set. The training data was
generated synthetically, by randomly sampling x values and choosing y deterministically
by evaluating a quadratic function. (Left) A linear function fit to the data suffers from
underfitting: it cannot capture the curvature that is present in the data. (Center) A
quadratic function fit to the data generalizes well to unseen points. It does not suffer from
a significant amount of overfitting or underfitting. (Right) A polynomial of degree 9 fit to
the data suffers from overfitting. Here we used the Moore-Penrose pseudoinverse to solve
the underdetermined normal equations. The solution passes through all of the training
points exactly, but we have not been lucky enough for it to extract the correct structure.
It now has a deep valley in between two training points that does not appear in the true
underlying function. It also increases sharply on the left side of the data, while the true
function decreases in this area.
So far we have described only one way of changing a model’s capacity: by
changing the number of input features it has, and simultaneously adding new
parameters associated with those features. There are in fact many ways of changing
a model’s capacity. Capacity is not determined only by the choice of model. The
model specifies which family of functions the learning algorithm can choose from
when varying the parameters in order to reduce a training objective. This is called
the representational capacity of the model. In many cases, finding the best
function within this family is a very difficult optimization problem. In practice,
the learning algorithm does not actually find the best function, but merely one
that significantly reduces the training error. These additional limitations, such as
the imperfection of the optimization algorithm, mean that the learning algorithm’s
effective capacity may be less than the representational capacity of the model
family.
Our modern ideas about improving the generalization of machine learning
models are refinements of thought dating back to philosophers at least as early
as Ptolemy. Many early scholars invoke a principle of parsimony that is now
most widely known as Occam’s razor (c. 1287-1347). This principle states that
among competing hypotheses that explain known observations equally well, one
should choose the “simplest” one. This idea was formalized and made more precise
in the 20th century by the founders of statistical learning theory (Vapnik and
Chervonenkis, 1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995).
Statistical learning theory provides various means of quantifying model capacity.
Among these, the most well-known is the Vapnik-Chervonenkis dimension, or
VC dimension. The VC dimension measures the capacity of a binary classifier. The
VC dimension is defined as being the largest possible value of m for which there
exists a training set of m different x points that the classifier can label arbitrarily.
Quantifying the capacity of the model allows statistical learning theory to
make quantitative predictions. The most important results in statistical learning
theory show that the discrepancy between training error and generalization error
is bounded from above by a quantity that grows as the model capacity grows but
shrinks as the number of training examples increases (Vapnik and Chervonenkis,
1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995). These bounds provide
intellectual justification that machine learning algorithms can work, but they are
rarely used in practice when working with deep learning algorithms. This is in
part because the bounds are often quite loose and in part because it can be quite
difficult to determine the capacity of deep learning algorithms. The problem of
determining the capacity of a deep learning model is especially difficult because the
effective capacity is limited by the capabilities of the optimization algorithm, and
we have little theoretical understanding of the very general non-convex optimization
problems involved in deep learning.
We must remember that while simpler functions are more likely to generalize
(to have a small gap between training and test error) we must still choose a
sufficiently complex hypothesis to achieve low training error. Typically, training
error decreases until it asymptotes to the minimum possible error value as model
capacity increases (assuming the error measure has a minimum value). Typically,
generalization error has a U-shaped curve as a function of model capacity. This is
illustrated in figure 5.3.
To reach the most extreme case of arbitrarily high capacity, we introduce
[Figure 5.3 graphic: error versus capacity, showing the underfitting zone, the optimal capacity, the overfitting zone, the training error and generalization error curves, and the generalization gap between them.]
Figure 5.3: Typical relationship between capacity and error. Training and test error
behave differently. At the left end of the graph, training error and generalization error
are both high. This is the underfitting regime. As we increase capacity, training error
decreases, but the gap between training and generalization error increases. Eventually,
the size of this gap outweighs the decrease in training error, and we enter the overfitting
regime, where capacity is too large, above the optimal capacity.
the concept of non-parametric models. So far, we have seen only parametric
models, such as linear regression. Parametric models learn a function described
by a parameter vector whose size is finite and fixed before any data is observed.
Non-parametric models have no such limitation.
Sometimes, non-parametric models are just theoretical abstractions (such as
an algorithm that searches over all possible probability distributions) that cannot
be implemented in practice. However, we can also design practical non-parametric
models by making their complexity a function of the training set size. One example
of such an algorithm is nearest neighbor regression. Unlike linear regression,
which has a fixed-length vector of weights, the nearest neighbor regression model
simply stores the X and y from the training set. When asked to classify a test
point x, the model looks up the nearest entry in the training set and returns the
associated regression target. In other words, \hat{y} = y_i where i = \arg\min \| \mathbf{X}_{i,:} - \mathbf{x} \|_2^2.
The algorithm can also be generalized to distance metrics other than the L^2 norm,
such as learned distance metrics (Goldberger et al., 2005). If the algorithm is
allowed to break ties by averaging the yi values for all Xi,: that are tied for nearest,
then this algorithm is able to achieve the minimum possible training error (which
might be greater than zero, if two identical inputs are associated with different
outputs) on any regression dataset.
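A minimal sketch of nearest neighbor regression (without the tie-averaging refinement) follows; the stored training data are made-up.

```python
import numpy as np

# The "model" simply memorizes the training data.
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.1, 1.2, 1.9, 3.1])

def nearest_neighbor_predict(x):
    """Return y_i for the training row X_{i,:} closest to x in squared L2 distance."""
    distances = np.sum((X_train - x) ** 2, axis=1)
    return y_train[np.argmin(distances)]

print(nearest_neighbor_predict(np.array([1.4])))  # -> 1.2
```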
Finally, we can also create a non-parametric learning algorithm by wrapping a
parametric learning algorithm inside another algorithm that increases the number
of parameters as needed. For example, we could imagine an outer loop of learning
that changes the degree of the polynomial learned by linear regression on top of a
polynomial expansion of the input.
The ideal model is an oracle that simply knows the true probability distribution
that generates the data. Even such a model will still incur some error on many
problems, because there may still be some noise in the distribution. In the case
of supervised learning, the mapping from x to y may be inherently stochastic,
or y may be a deterministic function that involves other variables besides those
included in x. The error incurred by an oracle making predictions from the true
distribution p(x, y) is called the Bayes error.
Training and generalization error vary as the size of the training set varies.
Expected generalization error can never increase as the number of training examples
increases. For non-parametric models, more data yields better generalization until
the best possible error is achieved. Any fixed parametric model with less than
optimal capacity will asymptote to an error value that exceeds the Bayes error. See
figure 5.4 for an illustration. Note that it is possible for the model to have optimal
capacity and yet still have a large gap between training and generalization error.
In this situation, we may be able to reduce this gap by gathering more training
examples.
5.2.1 The No Free Lunch Theorem
Learning theory claims that a machine learning algorithm can generalize well from
a finite training set of examples. This seems to contradict some basic principles of
logic. Inductive reasoning, or inferring general rules from a limited set of examples,
is not logically valid. To logically infer a rule describing every member of a set,
one must have information about every member of that set.
In part, machine learning avoids this problem by offering only probabilistic rules,
rather than the entirely certain rules used in purely logical reasoning. Machine
learning promises to find rules that are probably correct about most members of
the set they concern.
Unfortunately, even this does not resolve the entire problem. The no free
lunch theorem for machine learning (Wolpert, 1996) states that, averaged over
all possible data generating distributions, every classification algorithm has the
same error rate when classifying previously unobserved points. In other words,
in some sense, no machine learning algorithm is universally any better than any
other. The most sophisticated algorithm we can conceive of has the same average
Figure 5.4: The effect of the training dataset size on the train and test error, as well as
on the optimal model capacity. We constructed a synthetic regression problem based on
adding a moderate amount of noise to a degree-5 polynomial, generated a single test set,
and then generated several different sizes of training set. For each size, we generated 40
different training sets in order to plot error bars showing 95 percent confidence intervals.
(Top) The MSE on the training and test set for two different models: a quadratic model,
and a model with degree chosen to minimize the test error. Both are fit in closed form. For
the quadratic model, the training error increases as the size of the training set increases.
This is because larger datasets are harder to fit. Simultaneously, the test error decreases,
because fewer incorrect hypotheses are consistent with the training data. The quadratic
model does not have enough capacity to solve the task, so its test error asymptotes to
a high value. The test error at optimal capacity asymptotes to the Bayes error. The
training error can fall below the Bayes error, due to the ability of the training algorithm
to memorize specific instances of the training set. As the training size increases to infinity,
the training error of any fixed-capacity model (here, the quadratic model) must rise to at
least the Bayes error. (Bottom) As the training set size increases, the optimal capacity
(shown here as the degree of the optimal polynomial regressor) increases. The optimal
capacity plateaus after reaching sufficient complexity to solve the task.
performance (over all possible tasks) as merely predicting that every point belongs
to the same class.
Fortunately, these results hold only when we average over all possible data
generating distributions. If we make assumptions about the kinds of probability
distributions we encounter in real-world applications, then we can design learning
algorithms that perform well on these distributions.
This means that the goal of machine learning research is not to seek a universal
learning algorithm or the absolute best learning algorithm. Instead, our goal is to
understand what kinds of distributions are relevant to the “real world” that an AI
agent experiences, and what kinds of machine learning algorithms perform well on
data drawn from the kinds of data generating distributions we care about.
5.2.2 Regularization
The no free lunch theorem implies that we must design our machine learning
algorithms to perform well on a specific task. We do so by building a set of
preferences into the learning algorithm. When these preferences are aligned with
the learning problems we ask the algorithm to solve, it performs better.
So far, the only method of modifying a learning algorithm that we have discussed
concretely is to increase or decrease the model’s representational capacity by adding
or removing functions from the hypothesis space of solutions the learning algorithm
is able to choose. We gave the specific example of increasing or decreasing the
degree of a polynomial for a regression problem. The view we have described so
far is oversimplified.
The behavior of our algorithm is strongly affected not just by how large we
make the set of functions allowed in its hypothesis space, but by the specific identity
of those functions. The learning algorithm we have studied so far, linear regression,
has a hypothesis space consisting of the set of linear functions of its input. These
linear functions can be very useful for problems where the relationship between
inputs and outputs truly is close to linear. They are less useful for problems
that behave in a very nonlinear fashion. For example, linear regression would
not perform very well if we tried to use it to predict sin(x) from x. We can thus
control the performance of our algorithms by choosing what kind of functions we
allow them to draw solutions from, as well as by controlling the amount of these
functions.
We can also give a learning algorithm a preference for one solution in its
hypothesis space to another. This means that both functions are eligible, but one
is preferred. The unpreferred solution will be chosen only if it fits the training
data significantly better than the preferred solution.
For example, we can modify the training criterion for linear regression to include
weight decay. To perform linear regression with weight decay, we minimize a sum
comprising both the mean squared error on the training set and a criterion J(w) that
expresses a preference for the weights to have smaller squared L^2 norm. Specifically,
J(\mathbf{w}) = \mathrm{MSE}_{\mathrm{train}} + \lambda \mathbf{w}^\top \mathbf{w},    (5.18)
where λ is a value chosen ahead of time that controls the strength of our preference
for smaller weights. When λ = 0, we impose no preference, and larger λ forces the
weights to become smaller. Minimizing J(w) results in a choice of weights that
make a tradeoff between fitting the training data and being small. This gives us
solutions that have a smaller slope, or put weight on fewer of the features. As an
example of how we can control a model’s tendency to overfit or underfit via weight
decay, we can train a high-degree polynomial regression model with different values
of λ. See figure 5.5 for the results.
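As a minimal sketch (an assumption for illustration, not from the text), weight decay for linear regression can also be solved in closed form; the factor of m below comes from the 1/m inside MSE_train.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 5))                                   # hypothetical training inputs
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=20)

lam = 0.1                         # lambda: strength of the preference for smaller weights
m, n = X.shape

# Minimizing J(w) = MSE_train + lambda * w^T w (equation 5.18) in closed form:
# setting the gradient to zero gives (X^T X + m*lambda*I) w = X^T y.
w = np.linalg.solve(X.T @ X + m * lam * np.eye(n), X.T @ y)
```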
Figure 5.5: We fit a high-degree polynomial regression model to our example training set
from figure 5.2. The true function is quadratic, but here we use only models with degree 9.
We vary the amount of weight decay to prevent these high-degree models from overfitting.
(Left) With very large λ, we can force the model to learn a function with no slope at
all. This underfits because it can only represent a constant function. (Center) With a
medium value of λ, the learning algorithm recovers a curve with the right general shape.
Even though the model is capable of representing functions with much more complicated
shape, weight decay has encouraged it to use a simpler function described by smaller
coefficients. (Right) With weight decay approaching zero (i.e., using the Moore-Penrose
pseudoinverse to solve the underdetermined problem with minimal regularization), the
degree-9 polynomial overfits significantly, as we saw in figure 5.2.
More generally, we can regularize a model that learns a function f(x; θ) by
adding a penalty called a regularizer to the cost function. In the case of weight
decay, the regularizer is Ω(w) = w^⊤w. In chapter 7, we will see that many other
regularizers are possible.
Expressing preferences for one function over another is a more general way
of controlling a model’s capacity than including or excluding members from the
hypothesis space. We can think of excluding a function from a hypothesis space as
expressing an infinitely strong preference against that function.
In our weight decay example, we expressed our preference for linear functions
defined with smaller weights explicitly, via an extra term in the criterion we
minimize. There are many other ways of expressing preferences for different
solutions, both implicitly and explicitly. Together, these different approaches
are known as regularization. Regularization is any modification we make to a
learning algorithm that is intended to reduce its generalization error but not its
training error. Regularization is one of the central concerns of the field of machine
learning, rivaled in its importance only by optimization.
The no free lunch theorem has made it clear that there is no best machine
learning algorithm, and, in particular, no best form of regularization. Instead
we must choose a form of regularization that is well-suited to the particular task
we want to solve. The philosophy of deep learning in general and this book in
particular is that a very wide range of tasks (such as all of the intellectual tasks
that people can do) may all be solved effectively using very general-purpose forms
of regularization.
5.3 Hyperparameters and Validation Sets
Most machine learning algorithms have several settings that we can use to control
the behavior of the learning algorithm. These settings are called hyperparame-
ters. The values of hyperparameters are not adapted by the learning algorithm
itself (though we can design a nested learning procedure where one learning
algorithm learns the best hyperparameters for another learning algorithm).
In the polynomial regression example we saw in figure 5.2, there is a single
hyperparameter: the degree of the polynomial, which acts as a capacity hyper-
parameter. The λ value used to control the strength of weight decay is another
example of a hyperparameter.
Sometimes a setting is chosen to be a hyperparameter that the learning al-
gorithm does not learn because it is difficult to optimize. More frequently, the
setting must be a hyperparameter because it is not appropriate to learn that
hyperparameter on the training set. This applies to all hyperparameters that
control model capacity. If learned on the training set, such hyperparameters would
always choose the maximum possible model capacity, resulting in overfitting (refer
to figure 5.3). For example, we can always fit the training set better with a higher
degree polynomial and a weight decay setting of λ = 0 than we could with a lower
degree polynomial and a positive weight decay setting.
To solve this problem, we need a validation set of examples that the training
algorithm does not observe.
Earlier we discussed how a held-out test set, composed of examples coming from
the same distribution as the training set, can be used to estimate the generalization
error of a learner, after the learning process has completed. It is important that the
test examples are not used in any way to make choices about the model, including
its hyperparameters. For this reason, no example from the test set can be used
in the validation set. Therefore, we always construct the validation set from the
training data. Specifically, we split the training data into two disjoint subsets. One
of these subsets is used to learn the parameters. The other subset is our validation
set, used to estimate the generalization error during or after training, allowing
for the hyperparameters to be updated accordingly. The subset of data used to
learn the parameters is still typically called the training set, even though this
may be confused with the larger pool of data used for the entire training process.
The subset of data used to guide the selection of hyperparameters is called the
validation set. Typically, one uses about 80% of the training data for training and
20% for validation. Since the validation set is used to “train” the hyperparameters,
the validation set error will underestimate the generalization error, though typically
by a smaller amount than the training error. After all hyperparameter optimization
is complete, the generalization error may be estimated using the test set.
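A sketch of the usual 80/20 split of the available training data into a training subset and a validation set follows; the arrays are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
X, y = rng.normal(size=(100, 4)), rng.normal(size=100)  # hypothetical training data

# Shuffle, then hold out 20% of the training data as the validation set.
perm = rng.permutation(len(X))
n_train = int(0.8 * len(X))
train_idx, valid_idx = perm[:n_train], perm[n_train:]

X_train, y_train = X[train_idx], y[train_idx]
X_valid, y_valid = X[valid_idx], y[valid_idx]
# Fit parameters on (X_train, y_train); compare hyperparameter settings on (X_valid, y_valid).
```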
In practice, when the same test set has been used repeatedly to evaluate
performance of different algorithms over many years, and especially if we consider
all the attempts from the scientific community at beating the reported state-of-
the-art performance on that test set, we end up having optimistic evaluations with
the test set as well. Benchmarks can thus become stale and then do not reflect the
true field performance of a trained system. Thankfully, the community tends to
move on to new (and usually more ambitious and larger) benchmark datasets.
5.3.1 Cross-Validation
Dividing the dataset into a fixed training set and a fixed test set can be problematic
if it results in the test set being small. A small test set implies statistical uncertainty
around the estimated average test error, making it difficult to claim that algorithm
A works better than algorithm B on the given task.
When the dataset has hundreds of thousands of examples or more, this is not a
serious issue. When the dataset is too small, alternative procedures enable one
to use all of the examples in the estimation of the mean test error, at the price of
increased computational cost. These procedures are based on the idea of repeating
the training and testing computation on different randomly chosen subsets or splits
of the original dataset. The most common of these is the k-fold cross-validation
procedure, shown in algorithm 5.1, in which a partition of the dataset is formed by
splitting it into k non-overlapping subsets. The test error may then be estimated
by taking the average test error across k trials. On trial i, the i-th subset of the
data is used as the test set and the rest of the data is used as the training set. One
problem is that there exist no unbiased estimators of the variance of such average
error estimators (Bengio and Grandvalet, 2004), but approximations are typically
used.
5.4 Estimators, Bias and Variance
The field of statistics gives us many tools that can be used to achieve the machine
learning goal of solving a task not only on the training set but also to generalize.
Foundational concepts such as parameter estimation, bias and variance are useful
to formally characterize notions of generalization, underfitting and overfitting.
5.4.1 Point Estimation
Point estimation is the attempt to provide the single “best” prediction of some
quantity of interest. In general the quantity of interest can be a single parameter
or a vector of parameters in some parametric model, such as the weights in our
linear regression example in section 5.1.4, but it can also be a whole function.
In order to distinguish estimates of parameters from their true value, our
convention will be to denote a point estimate of a parameter θ by θ̂.
Let {x^{(1)}, . . . , x^{(m)}} be a set of m independent and identically distributed
Algorithm 5.1 The k-fold cross-validation algorithm. It can be used to estimate
generalization error of a learning algorithm A when the given dataset D is too
small for a simple train/test or train/valid split to yield accurate estimation of
generalization error, because the mean of a loss L on a small test set may have too
high variance. The dataset D contains as elements the abstract examples z^{(i)} (for
the i-th example), which could stand for an (input, target) pair z^{(i)} = (x^{(i)}, y^{(i)})
in the case of supervised learning, or for just an input z^{(i)} = x^{(i)} in the case
of unsupervised learning. The algorithm returns the vector of errors e for each
example in D, whose mean is the estimated generalization error. The errors on
individual examples can be used to compute a confidence interval around the mean
(equation 5.47). While these confidence intervals are not well-justified after the
use of cross-validation, it is still common practice to use them to declare that
algorithm A is better than algorithm B only if the confidence interval of the error
of algorithm A lies below and does not intersect the confidence interval of algorithm
B.

Define KFoldXV(D, A, L, k):
Require: D, the given dataset, with elements z^{(i)}
Require: A, the learning algorithm, seen as a function that takes a dataset as
  input and outputs a learned function
Require: L, the loss function, seen as a function from a learned function f and
  an example z^{(i)} ∈ D to a scalar ∈ R
Require: k, the number of folds
  Split D into k mutually exclusive subsets D_i, whose union is D.
  for i from 1 to k do
    f_i = A(D \ D_i)
    for z^{(j)} in D_i do
      e_j = L(f_i, z^{(j)})
    end for
  end for
  Return e
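The procedure can be sketched directly in NumPy. The learner and loss below (a mean predictor scored with squared error) are stand-ins chosen only to make the sketch runnable; they are not part of the algorithm.

```python
import numpy as np

def k_fold_cv(X, y, k, train_fn, loss_fn, seed=0):
    """Return per-example errors e; their mean estimates the generalization error."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))
    folds = np.array_split(indices, k)           # k mutually exclusive subsets D_i
    errors = np.empty(len(X))
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        f_i = train_fn(X[train_idx], y[train_idx])        # f_i = A(D \ D_i)
        errors[test_idx] = loss_fn(f_i, X[test_idx], y[test_idx])
    return errors

# Stand-in learner and loss: predict the training mean, score with squared error.
train_mean = lambda X, y: y.mean()
sq_loss = lambda f, X, y: (f - y) ** 2

X_demo = np.arange(20.0).reshape(-1, 1)
y_demo = X_demo[:, 0] + 1.0
e = k_fold_cv(X_demo, y_demo, k=5, train_fn=train_mean, loss_fn=sq_loss)
print(e.mean())
```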
(i.i.d.) data points. A point estimator or statistic is any function of the data:

\hat{\theta}_m = g\left( x^{(1)}, \ldots, x^{(m)} \right).    (5.19)
The definition does not require that g return a value that is close to the true
θ or even that the range of g is the same as the set of allowable values of θ.
This definition of a point estimator is very general and allows the designer of an
estimator great flexibility. While almost any function thus qualifies as an estimator,
a good estimator is a function whose output is close to the true underlying θ that
generated the training data.
For now, we take the frequentist perspective on statistics. That is, we assume
that the true parameter value θ is fixed but unknown, while the point estimate
θ̂ is a function of the data. Since the data is drawn from a random process, any
function of the data is random. Therefore θ̂ is a random variable.
Point estimation can also refer to the estimation of the relationship between
input and target variables. We refer to these types of point estimates as function
estimators.
Function Estimation As we mentioned above, sometimes we are interested in
performing function estimation (or function approximation). Here we are trying to
predict a variable y given an input vector x. We assume that there is a function
f(x) that describes the approximate relationship between y and x. For example,
we may assume that y = f(x) + ε, where ε stands for the part of y that is not
predictable from x. In function estimation, we are interested in approximating
f with a model or estimate f̂. Function estimation is really just the same as
estimating a parameter θ; the function estimator f̂ is simply a point estimator in
function space. The linear regression example (discussed above in section 5.1.4) and
the polynomial regression example (discussed in section 5.2) are both examples of
scenarios that may be interpreted either as estimating a parameter w or estimating
a function f̂ mapping from x to y.
We now review the most commonly studied properties of point estimators and
discuss what they tell us about these estimators.
5.4.2 Bias
The bias of an estimator is defined as:
\mathrm{bias}(\hat{\theta}_m) = \mathbb{E}(\hat{\theta}_m) - \theta    (5.20)
where the expectation is over the data (seen as samples from a random variable)
and θ is the true underlying value of θ used to define the data generating distri-
bution. An estimator θ̂m is said to be unbiased if bias(θ̂m) = 0, which implies
that E(θ̂m) = θ. An estimator θ̂m is said to be asymptotically unbiased if
lim_{m→∞} bias(θ̂m) = 0, which implies that lim_{m→∞} E(θ̂m) = θ.
Example: Bernoulli Distribution Consider a set of samples {x^{(1)}, . . . , x^{(m)}}
that are independently and identically distributed according to a Bernoulli distribution with mean θ:

P(x^{(i)}; \theta) = \theta^{x^{(i)}} (1 - \theta)^{(1 - x^{(i)})}.    (5.21)

A common estimator for the θ parameter of this distribution is the mean of the training samples:

\hat{\theta}_m = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}.    (5.22)
To determine whether this estimator is biased, we can substitute equation 5.22
into equation 5.20:

\mathrm{bias}(\hat{\theta}_m) = \mathbb{E}[\hat{\theta}_m] - \theta    (5.23)

= \mathbb{E}\left[ \frac{1}{m} \sum_{i=1}^{m} x^{(i)} \right] - \theta    (5.24)

= \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}\left[ x^{(i)} \right] - \theta    (5.25)

= \frac{1}{m} \sum_{i=1}^{m} \sum_{x^{(i)}=0}^{1} \left( x^{(i)} \theta^{x^{(i)}} (1 - \theta)^{(1 - x^{(i)})} \right) - \theta    (5.26)

= \frac{1}{m} \sum_{i=1}^{m} \theta - \theta    (5.27)

= \theta - \theta = 0    (5.28)
Since bias(θ̂) = 0, we say that our estimator θ̂ is unbiased.
Example: Gaussian Distribution Estimator of the Mean Now, consider
a set of samples {x^{(1)}, . . . , x^{(m)}} that are independently and identically distributed
according to a Gaussian distribution p(x^{(i)}) = \mathcal{N}(x^{(i)}; \mu, \sigma^2), where i ∈ {1, . . . , m}.
Recall that the Gaussian probability density function is given by
p(x^{(i)}; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{1}{2} \frac{(x^{(i)} - \mu)^2}{\sigma^2} \right).    (5.29)

A common estimator of the Gaussian mean parameter is known as the sample mean:

\hat{\mu}_m = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}    (5.30)
To determine the bias of the sample mean, we are again interested in calculating
its expectation:
\mathrm{bias}(\hat{\mu}_m) = \mathbb{E}[\hat{\mu}_m] - \mu    (5.31)

= \mathbb{E}\left[ \frac{1}{m} \sum_{i=1}^{m} x^{(i)} \right] - \mu    (5.32)

= \left( \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}\left[ x^{(i)} \right] \right) - \mu    (5.33)

= \left( \frac{1}{m} \sum_{i=1}^{m} \mu \right) - \mu    (5.34)

= \mu - \mu = 0    (5.35)
Thus we find that the sample mean is an unbiased estimator of the Gaussian mean
parameter.
Example: Estimators of the Variance of a Gaussian Distribution As an
example, we compare two different estimators of the variance parameter σ2 of a
Gaussian distribution. We are interested in knowing if either estimator is biased.
The first estimator of σ² we consider is known as the sample variance:

\hat{\sigma}_m^2 = \frac{1}{m} \sum_{i=1}^{m} \left( x^{(i)} - \hat{\mu}_m \right)^2,    (5.36)

where \hat{\mu}_m is the sample mean, defined above. More formally, we are interested in computing

\mathrm{bias}(\hat{\sigma}_m^2) = \mathbb{E}[\hat{\sigma}_m^2] - \sigma^2    (5.37)
We begin by evaluating the term \mathbb{E}[\hat{\sigma}_m^2]:

\mathbb{E}[\hat{\sigma}_m^2] = \mathbb{E}\left[ \frac{1}{m} \sum_{i=1}^{m} \left( x^{(i)} - \hat{\mu}_m \right)^2 \right]    (5.38)

= \frac{m-1}{m} \sigma^2    (5.39)

Returning to equation 5.37, we conclude that the bias of \hat{\sigma}_m^2 is -\sigma^2/m. Therefore,
the sample variance is a biased estimator.
The unbiased sample variance estimator
\tilde{\sigma}_m^2 = \frac{1}{m-1} \sum_{i=1}^{m} \left( x^{(i)} - \hat{\mu}_m \right)^2    (5.40)

provides an alternative approach. As the name suggests, this estimator is unbiased.
That is, we find that \mathbb{E}[\tilde{\sigma}_m^2] = \sigma^2:

\mathbb{E}[\tilde{\sigma}_m^2] = \mathbb{E}\left[ \frac{1}{m-1} \sum_{i=1}^{m} \left( x^{(i)} - \hat{\mu}_m \right)^2 \right]    (5.41)

= \frac{m}{m-1} \mathbb{E}[\hat{\sigma}_m^2]    (5.42)

= \frac{m}{m-1} \left( \frac{m-1}{m} \sigma^2 \right)    (5.43)

= \sigma^2.    (5.44)
We have two estimators: one is biased and the other is not. While unbiased
estimators are clearly desirable, they are not always the “best” estimators. As we
will see we often use biased estimators that possess other important properties.
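A short simulation (an illustration, not from the text) shows the two estimators side by side; NumPy's ddof argument switches between the 1/m and 1/(m − 1) normalizations.

```python
import numpy as np

rng = np.random.default_rng(4)
true_var, m = 4.0, 10

# Average each estimator over many datasets of size m drawn from N(0, sigma^2 = 4).
samples = rng.normal(0.0, np.sqrt(true_var), size=(100_000, m))
biased = samples.var(axis=1, ddof=0).mean()    # sample variance, equation 5.36
unbiased = samples.var(axis=1, ddof=1).mean()  # unbiased estimator, equation 5.40

print(biased)    # close to (m - 1)/m * 4 = 3.6
print(unbiased)  # close to 4.0
```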
5.4.3 Variance and Standard Error
Another property of the estimator that we might want to consider is how much
we expect it to vary as a function of the data sample. Just as we computed the
expectation of the estimator to determine its bias, we can compute its variance.
The variance of an estimator is simply the variance

\mathrm{Var}(\hat{\theta})    (5.45)

where the random variable is the training set. Alternately, the square root of the
variance is called the standard error, denoted SE(θ̂).
The variance or the standard error of an estimator provides a measure of how
we would expect the estimate we compute from data to vary as we independently
resample the dataset from the underlying data generating process. Just as we
might like an estimator to exhibit low bias we would also like it to have relatively
low variance.
When we compute any statistic using a finite number of samples, our estimate
of the true underlying parameter is uncertain, in the sense that we could have
obtained other samples from the same distribution and their statistics would have
been different. The expected degree of variation in any estimator is a source of
error that we want to quantify.
The standard error of the mean is given by
\mathrm{SE}(\hat{\mu}_m) = \sqrt{ \mathrm{Var}\left[ \frac{1}{m} \sum_{i=1}^{m} x^{(i)} \right] } = \frac{\sigma}{\sqrt{m}},    (5.46)
where σ2 is the true variance of the samples xi. The standard error is often
estimated by using an estimate of σ. Unfortunately, neither the square root of
the sample variance nor the square root of the unbiased estimator of the variance
provide an unbiased estimate of the standard deviation. Both approaches tend
to underestimate the true standard deviation, but are still used in practice. The
square root of the unbiased estimator of the variance is less of an underestimate.
For large m, the approximation is quite reasonable.
The standard error of the mean is very useful in machine learning experiments.
We often estimate the generalization error by computing the sample mean of the
error on the test set. The number of examples in the test set determines the
accuracy of this estimate. Taking advantage of the central limit theorem, which
tells us that the mean will be approximately distributed with a normal distribution,
we can use the standard error to compute the probability that the true expectation
falls in any chosen interval. For example, the 95% confidence interval centered on
the mean µ̂m is
\left( \hat{\mu}_m - 1.96\,\mathrm{SE}(\hat{\mu}_m),\ \hat{\mu}_m + 1.96\,\mathrm{SE}(\hat{\mu}_m) \right),    (5.47)
under the normal distribution with mean µ̂m and variance SE(µ̂m)2. In machine
learning experiments, it is common to say that algorithm A is better than algorithm
B if the upper bound of the 95% confidence interval for the error of algorithm A is
less than the lower bound of the 95% confidence interval for the error of algorithm
B.
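A sketch of this computation on a hypothetical vector of per-example 0-1 test losses follows.

```python
import numpy as np

rng = np.random.default_rng(5)
errors = rng.binomial(1, 0.12, size=1000).astype(float)  # hypothetical 0-1 losses on a test set

mean_error = errors.mean()
# Standard error of the mean, using the square root of the unbiased variance estimate.
se = errors.std(ddof=1) / np.sqrt(len(errors))
ci_low, ci_high = mean_error - 1.96 * se, mean_error + 1.96 * se
print(mean_error, (ci_low, ci_high))
```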
Example: Bernoulli Distribution We once again consider a set of samples
{x^{(1)}, . . . , x^{(m)}} drawn independently and identically from a Bernoulli distribution
(recall P(x^{(i)}; θ) = θ^{x^{(i)}} (1 − θ)^{(1 − x^{(i)})}). This time we are interested in computing
the variance of the estimator \hat{\theta}_m = \frac{1}{m} \sum_{i=1}^{m} x^{(i)}:

\mathrm{Var}\left( \hat{\theta}_m \right) = \mathrm{Var}\left( \frac{1}{m} \sum_{i=1}^{m} x^{(i)} \right)    (5.48)

= \frac{1}{m^2} \sum_{i=1}^{m} \mathrm{Var}\left( x^{(i)} \right)    (5.49)

= \frac{1}{m^2} \sum_{i=1}^{m} \theta (1 - \theta)    (5.50)

= \frac{1}{m^2} m\, \theta (1 - \theta)    (5.51)

= \frac{1}{m} \theta (1 - \theta)    (5.52)
The variance of the estimator decreases as a function of m, the number of examples
in the dataset. This is a common property of popular estimators that we will
return to when we discuss consistency (see section 5.4.5).
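Equation 5.52 is easy to check numerically. The sketch below (NumPy; the value of θ and the sample sizes are illustrative assumptions) draws many datasets of each size m and compares the empirical variance of θ̂_m with θ(1 − θ)/m.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, trials = 0.3, 10_000

for m in (10, 100, 1000):
    # Draw `trials` independent datasets of size m and compute theta_hat for each.
    theta_hat = rng.binomial(1, theta, size=(trials, m)).mean(axis=1)
    print(f"m={m:4d}  empirical Var={theta_hat.var():.6f}  "
          f"theta(1-theta)/m={theta * (1 - theta) / m:.6f}")
```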
5.4.4 Trading off Bias and Variance to Minimize Mean Squared
Error
Bias and variance measure two different sources of error in an estimator. Bias
measures the expected deviation from the true value of the function or parameter.
Variance on the other hand, provides a measure of the deviation from the expected
estimator value that any particular sampling of the data is likely to cause.
What happens when we are given a choice between two estimators, one with
more bias and one with more variance? How do we choose between them? For
example, imagine that we are interested in approximating the function shown in
figure 5.2 and we are only offered the choice between a model with large bias and
one that suffers from large variance. How do we choose between them?
The most common way to negotiate this trade-off is to use cross-validation.
Empirically, cross-validation is highly successful on many real-world tasks. Alter-
natively, we can also compare the mean squared error (MSE) of the estimates:
\mathrm{MSE} = \mathbb{E}[(\hat{\theta}_m - \theta)^2]    (5.53)
= \mathrm{Bias}(\hat{\theta}_m)^2 + \mathrm{Var}(\hat{\theta}_m)    (5.54)
The MSE measures the overall expected deviation—in a squared error sense—
between the estimator and the true value of the parameter θ. As is clear from
equation 5.54, evaluating the MSE incorporates both the bias and the variance.
Desirable estimators are those with small MSE and these are estimators that
manage to keep both their bias and variance somewhat in check.
Figure 5.6: As capacity increases (x-axis), bias (dotted) tends to decrease and variance
(dashed) tends to increase, yielding another U-shaped curve for generalization error (bold
curve). If we vary capacity along one axis, there is an optimal capacity, with underfitting
when the capacity is below this optimum and overfitting when it is above. This relationship
is similar to the relationship between capacity, underfitting, and overfitting, discussed in
section 5.2 and figure 5.3.
The relationship between bias and variance is tightly linked to the machine
learning concepts of capacity, underfitting and overfitting. In the case where gen-
eralization error is measured by the MSE (where bias and variance are meaningful
components of generalization error), increasing capacity tends to increase variance
and decrease bias. This is illustrated in figure 5.6, where we see again the U-shaped
curve of generalization error as a function of capacity.
5.4.5 Consistency
So far we have discussed the properties of various estimators for a training set of
fixed size. Usually, we are also concerned with the behavior of an estimator as the
amount of training data grows. In particular, we usually wish that, as the number
of data points m in our dataset increases, our point estimates converge to the true
value of the corresponding parameters. More formally, we would like that

\mathrm{plim}_{m \to \infty} \hat{\theta}_m = \theta.    (5.55)

The symbol plim indicates convergence in probability, meaning that for any ε > 0,
P(|\hat{\theta}_m - \theta| > \varepsilon) \to 0 as m → ∞. The condition described by equation 5.55 is
known as consistency. It is sometimes referred to as weak consistency, with
strong consistency referring to the almost sure convergence of \hat{\theta} to θ. Almost
sure convergence of a sequence of random variables x^{(1)}, x^{(2)}, \ldots to a value x
occurs when p(\lim_{m \to \infty} x^{(m)} = x) = 1.
Consistency ensures that the bias induced by the estimator diminishes as the
number of data examples grows. However, the reverse is not true—asymptotic
unbiasedness does not imply consistency. For example, consider estimating the
mean parameter µ of a normal distribution N(x; µ, σ2), with a dataset consisting
of m samples: {x(1)
, . . . , x( )
m }. We could use the first sample x(1) of the dataset
as an unbiased estimator: θ̂ = x(1)
. In that case, E(ˆ
θm) = θ so the estimator
is unbiased no matter how many data points are seen. This, of course, implies
that the estimate is asymptotically unbiased. However, this is not a consistent
estimator, as it is not the case that \hat{\theta}_m \to \theta as m \to \infty.
5.5 Maximum Likelihood Estimation
Previously, we have seen some definitions of common estimators and analyzed
their properties. But where did these estimators come from? Rather than guessing
that some function might make a good estimator and then analyzing its bias and
variance, we would like to have some principle from which we can derive specific
functions that are good estimators for different models.
The most common such principle is the maximum likelihood principle.
Consider a set of m examples X = {x^{(1)}, \ldots, x^{(m)}} drawn independently from
the true but unknown data generating distribution p_data(x).
Let pmodel(x;θ) be a parametric family of probability distributions over the
same space indexed by θ. In other words, pmodel(x;θ) maps any configuration x
to a real number estimating the true probability p_data(x).

The maximum likelihood estimator for θ is then defined as

\theta_{ML} = \arg\max_{\theta}\, p_{model}(X; \theta)    (5.56)
= \arg\max_{\theta} \prod_{i=1}^{m} p_{model}(x^{(i)}; \theta)    (5.57)
This product over many probabilities can be inconvenient for a variety of reasons.
For example, it is prone to numerical underflow. To obtain a more convenient
but equivalent optimization problem, we observe that taking the logarithm of the
likelihood does not change its arg max but does conveniently transform a product
into a sum:
\theta_{ML} = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{model}(x^{(i)}; \theta).    (5.58)
Because the arg max does not change when we rescale the cost function, we can
divide by m to obtain a version of the criterion that is expressed as an expectation
with respect to the empirical distribution p̂data defined by the training data:
\theta_{ML} = \arg\max_{\theta} \mathbb{E}_{\mathbf{x} \sim \hat{p}_{data}} \log p_{model}(x; \theta).    (5.59)
One way to interpret maximum likelihood estimation is to view it as minimizing
the dissimilarity between the empirical distribution p̂data defined by the training
set and the model distribution, with the degree of dissimilarity between the two
measured by the KL divergence. The KL divergence is given by
DKL (p̂data pmodel) = Ex∼p̂data
[log p̂data ( ) log
x − pmodel( )]
x . (5.60)
The term on the left is a function only of the data generating process, not the
model. This means when we train the model to minimize the KL divergence, we
need only minimize
-\mathbb{E}_{\mathbf{x} \sim \hat{p}_{data}} [\log p_{model}(x)],    (5.61)

which is of course the same as the maximization in equation 5.59.
Minimizing this KL divergence corresponds exactly to minimizing the cross-
entropy between the distributions. Many authors use the term “cross-entropy” to
identify specifically the negative log-likelihood of a Bernoulli or softmax distribution,
but that is a misnomer. Any loss consisting of a negative log-likelihood is a cross-
entropy between the empirical distribution defined by the training set and the
probability distribution defined by the model. For example, mean squared error is the
cross-entropy between the empirical distribution and a Gaussian model.
We can thus see maximum likelihood as an attempt to make the model dis-
tribution match the empirical distribution p̂data. Ideally, we would like to match
the true data generating distribution pdata, but we have no direct access to this
distribution.
While the optimal θ is the same regardless of whether we are maximizing the
likelihood or minimizing the KL divergence, the values of the objective functions
132
CHAPTER 5. MACHINE LEARNING BASICS
are different. In software, we often phrase both as minimizing a cost function.
Maximum likelihood thus becomes minimization of the negative log-likelihood
(NLL), or equivalently, minimization of the cross entropy. The perspective of
maximum likelihood as minimum KL divergence becomes helpful in this case
because the KL divergence has a known minimum value of zero. The negative
log-likelihood can actually become negative when x is real-valued.
5.5.1 Conditional Log-Likelihood and Mean Squared Error
The maximum likelihood estimator can readily be generalized to the case where
our goal is to estimate a conditional probability P(y | x; θ) in order to predict y
given x. This is actually the most common situation because it forms the basis for
most supervised learning. If X represents all our inputs and Y all our observed
targets, then the conditional maximum likelihood estimator is
\theta_{ML} = \arg\max_{\theta} P(Y \mid X; \theta).    (5.62)

If the examples are assumed to be i.i.d., then this can be decomposed into

\theta_{ML} = \arg\max_{\theta} \sum_{i=1}^{m} \log P(y^{(i)} \mid x^{(i)}; \theta).    (5.63)
Example: Linear Regression as Maximum Likelihood Linear regression,
introduced earlier in section 5.1.4, may be justified as a maximum likelihood
procedure. Previously, we motivated linear regression as an algorithm that learns
to take an input x and produce an output value ŷ. The mapping from x to ŷ is
chosen to minimize mean squared error, a criterion that we introduced more or less
arbitrarily. We now revisit linear regression from the point of view of maximum
likelihood estimation. Instead of producing a single prediction ŷ, we now think
of the model as producing a conditional distribution p(y | x). We can imagine
that with an infinitely large training set, we might see several training examples
with the same input value x but different values of y. The goal of the learning
algorithm is now to fit the distribution p(y | x) to all of those different y values
that are all compatible with x. To derive the same linear regression algorithm
we obtained before, we define p(y | x) = N (y;ŷ(x;w), σ2). The function ŷ(x; w)
gives the prediction of the mean of the Gaussian. In this example, we assume that
the variance is fixed to some constant σ² chosen by the user. We will see that this
choice of the functional form of p(y | x) causes the maximum likelihood estimation
procedure to yield the same learning algorithm as we developed before. Since the
examples are assumed to be i.i.d., the conditional log-likelihood (equation 5.63) is given by

\sum_{i=1}^{m} \log p(y^{(i)} \mid x^{(i)}; \theta)    (5.64)
= -m \log \sigma - \frac{m}{2} \log(2\pi) - \sum_{i=1}^{m} \frac{\left\| \hat{y}^{(i)} - y^{(i)} \right\|^2}{2\sigma^2},    (5.65)

where \hat{y}^{(i)} is the output of the linear regression on the i-th input x^{(i)} and m is the
number of the training examples. Comparing the log-likelihood with the mean
squared error,
\mathrm{MSE}_{train} = \frac{1}{m} \sum_{i=1}^{m} \left\| \hat{y}^{(i)} - y^{(i)} \right\|^2,    (5.66)
we immediately see that maximizing the log-likelihood with respect to w yields
the same estimate of the parameters w as does minimizing the mean squared error.
The two criteria have different values but the same location of the optimum. This
justifies the use of the MSE as a maximum likelihood estimation procedure. As we
will see, the maximum likelihood estimator has several desirable properties.
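The equivalence can be verified numerically. The following sketch (NumPy; the synthetic one-parameter model ŷ = wx, the data, and the fixed σ are assumptions made for the example) evaluates the negative of the log-likelihood in equation 5.65 and the MSE of equation 5.66 on a grid of w values, and shows that their minimizers coincide even though their values differ.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200
x = rng.normal(size=m)
w_true, sigma = 2.5, 1.0
y = w_true * x + rng.normal(scale=sigma, size=m)   # synthetic data

ws = np.linspace(0.0, 5.0, 1001)
mse = np.array([np.mean((w * x - y) ** 2) for w in ws])                  # eq. 5.66
nll = np.array([m * np.log(sigma) + 0.5 * m * np.log(2 * np.pi)
                + np.sum((w * x - y) ** 2) / (2 * sigma ** 2)
                for w in ws])                                            # negative of eq. 5.65

# Different values, same location of the optimum.
print("argmin MSE:", ws[np.argmin(mse)])
print("argmin NLL:", ws[np.argmin(nll)])
```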
5.5.2 Properties of Maximum Likelihood
The main appeal of the maximum likelihood estimator is that it can be shown to
be the best estimator asymptotically, as the number of examples m → ∞, in terms
of its rate of convergence as m increases.
Under appropriate conditions, the maximum likelihood estimator has the
property of consistency (see section 5.4.5 above), meaning that as the number
of training examples approaches infinity, the maximum likelihood estimate of a
parameter converges to the true value of the parameter. These conditions are:
• The true distribution pdata must lie within the model family pmodel(·; θ).
Otherwise, no estimator can recover pdata .
• The true distribution pdata must correspond to exactly one value of θ. Other-
wise, maximum likelihood can recover the correct pdata , but will not be able
to determine which value of θ was used by the data generating process.
There are other inductive principles besides the maximum likelihood estima-
tor, many of which share the property of being consistent estimators. However,
consistent estimators can differ in their statistical efficiency, meaning that one
consistent estimator may obtain lower generalization error for a fixed number of
samples m, or equivalently, may require fewer examples to obtain a fixed level of
generalization error.
Statistical efficiency is typically studied in the parametric case (like in linear
regression) where our goal is to estimate the value of a parameter (and assuming
it is possible to identify the true parameter), not the value of a function. A way to
measure how close we are to the true parameter is by the expected mean squared
error, computing the squared difference between the estimated and true parameter
values, where the expectation is over m training samples from the data generating
distribution. That parametric mean squared error decreases as m increases, and
for m large, the Cramér-Rao lower bound (Rao, 1945; Cramér, 1946) shows that no
consistent estimator has a lower mean squared error than the maximum likelihood
estimator.
For these reasons (consistency and efficiency), maximum likelihood is often
considered the preferred estimator to use for machine learning. When the number
of examples is small enough to yield overfitting behavior, regularization strategies
such as weight decay may be used to obtain a biased version of maximum likelihood
that has less variance when training data is limited.
5.6 Bayesian Statistics
So far we have discussed frequentist statistics and approaches based on estimat-
ing a single value of θ, then making all predictions thereafter based on that one
estimate. Another approach is to consider all possible values of θ when making a
prediction. The latter is the domain of Bayesian statistics.
As discussed in section 5.4.1, the frequentist perspective is that the true
parameter value θ is fixed but unknown, while the point estimate θ̂ is a random
variable on account of it being a function of the dataset (which is seen as random).
The Bayesian perspective on statistics is quite different. The Bayesian uses
probability to reflect degrees of certainty of states of knowledge. The dataset is
directly observed and so is not random. On the other hand, the true parameter θ
is unknown or uncertain and thus is represented as a random variable.
Before observing the data, we represent our knowledge of θ using the prior
probability distribution, p(θ) (sometimes referred to as simply “the prior”).
Generally, the machine learning practitioner selects a prior distribution that is
quite broad (i.e. with high entropy) to reflect a high degree of uncertainty in the
value of θ before observing any data. For example, one might assume a priori that
θ lies in some finite range or volume, with a uniform distribution. Many priors
instead reflect a preference for “simpler” solutions (such as smaller magnitude
coefficients, or a function that is closer to being constant).
Now consider that we have a set of data samples {x^{(1)}, \ldots, x^{(m)}}. We can
recover the effect of data on our belief about θ by combining the data likelihood
p(x^{(1)}, \ldots, x^{(m)} \mid \theta) with the prior via Bayes' rule:

p(\theta \mid x^{(1)}, \ldots, x^{(m)}) = \frac{ p(x^{(1)}, \ldots, x^{(m)} \mid \theta)\, p(\theta) }{ p(x^{(1)}, \ldots, x^{(m)}) }    (5.67)
In the scenarios where Bayesian estimation is typically used, the prior begins as a
relatively uniform or Gaussian distribution with high entropy, and the observation
of the data usually causes the posterior to lose entropy and concentrate around a
few highly likely values of the parameters.
Relative to maximum likelihood estimation, Bayesian estimation offers two
important differences. First, unlike the maximum likelihood approach that makes
predictions using a point estimate of θ, the Bayesian approach is to make predictions
using a full distribution over θ. For example, after observing m examples, the
predicted distribution over the next data sample, x^{(m+1)}, is given by

p(x^{(m+1)} \mid x^{(1)}, \ldots, x^{(m)}) = \int p(x^{(m+1)} \mid \theta)\, p(\theta \mid x^{(1)}, \ldots, x^{(m)})\, d\theta.    (5.68)
Here each value of θ with positive probability density contributes to the prediction
of the next example, with the contribution weighted by the posterior density itself.
After having observed {x^{(1)}, \ldots, x^{(m)}}, if we are still quite uncertain about the
value of θ, then this uncertainty is incorporated directly into any predictions we
value of θ, then this uncertainty is incorporated directly into any predictions we
might make.
In section 5.4, we discussed how the frequentist approach addresses the uncertainty
in a given point estimate of θ by evaluating its variance. The variance of
the estimator is an assessment of how the estimate might change with alternative
samplings of the observed data. The Bayesian answer to the question of how to deal
with the uncertainty in the estimator is to simply integrate over it, which tends to
protect well against overfitting. This integral is of course just an application of
the laws of probability, making the Bayesian approach simple to justify, while the
frequentist machinery for constructing an estimator is based on the rather ad hoc
decision to summarize all knowledge contained in the dataset with a single point
estimate.
The second important difference between the Bayesian approach to estimation
and the maximum likelihood approach is due to the contribution of the Bayesian
prior distribution. The prior has an influence by shifting probability mass density
towards regions of the parameter space that are preferred a priori. In practice,
the prior often expresses a preference for models that are simpler or more smooth.
Critics of the Bayesian approach identify the prior as a source of subjective human
judgment impacting the predictions.
Bayesian methods typically generalize much better when limited training data
is available, but typically suffer from high computational cost when the number of
training examples is large.
Example: Bayesian Linear Regression Here we consider the Bayesian esti-
mation approach to learning the linear regression parameters. In linear regression,
we learn a linear mapping from an input vector x ∈ R^n to predict the value of a
scalar y ∈ R. The prediction is parametrized by the vector w ∈ R^n:

\hat{y} = w^\top x.    (5.69)

Given a set of m training samples (X^{(train)}, y^{(train)}), we can express the prediction
of y over the entire training set as:

\hat{y}^{(train)} = X^{(train)} w.    (5.70)

Expressed as a Gaussian conditional distribution on y^{(train)}, we have

p(y^{(train)} \mid X^{(train)}, w) = \mathcal{N}(y^{(train)}; X^{(train)} w, I)    (5.71)
\propto \exp\left( -\frac{1}{2} (y^{(train)} - X^{(train)} w)^\top (y^{(train)} - X^{(train)} w) \right),    (5.72)

where we follow the standard MSE formulation in assuming that the Gaussian
variance on y is one. In what follows, to reduce the notational burden, we refer to
(X^{(train)}, y^{(train)}) as simply (X, y).
To determine the posterior distribution over the model parameter vector w, we
first need to specify a prior distribution. The prior should reflect our naive belief
about the value of these parameters. While it is sometimes difficult or unnatural
to express our prior beliefs in terms of the parameters of the model, in practice we
typically assume a fairly broad distribution expressing a high degree of uncertainty
about θ. For real-valued parameters it is common to use a Gaussian as a prior
distribution:
p(w) = \mathcal{N}(w; \mu_0, \Lambda_0) \propto \exp\left( -\frac{1}{2} (w - \mu_0)^\top \Lambda_0^{-1} (w - \mu_0) \right),    (5.73)
where µ_0 and Λ_0 are the prior distribution mean vector and covariance matrix
respectively.¹

¹ Unless there is a reason to assume a particular covariance structure, we typically assume a diagonal covariance matrix Λ_0 = diag(λ_0).
With the prior thus specified, we can now proceed in determining the posterior
distribution over the model parameters.
p(w \mid X, y) \propto p(y \mid X, w)\, p(w)    (5.74)
\propto \exp\left( -\frac{1}{2} (y - Xw)^\top (y - Xw) \right) \exp\left( -\frac{1}{2} (w - \mu_0)^\top \Lambda_0^{-1} (w - \mu_0) \right)    (5.75)
\propto \exp\left( -\frac{1}{2} \left( -2\, y^\top X w + w^\top X^\top X w + w^\top \Lambda_0^{-1} w - 2\, \mu_0^\top \Lambda_0^{-1} w \right) \right).    (5.76)
We now define \Lambda_m = \left( X^\top X + \Lambda_0^{-1} \right)^{-1} and \mu_m = \Lambda_m \left( X^\top y + \Lambda_0^{-1} \mu_0 \right). Using
these new variables, we find that the posterior may be rewritten as a Gaussian
distribution:

p(w \mid X, y) \propto \exp\left( -\frac{1}{2} (w - \mu_m)^\top \Lambda_m^{-1} (w - \mu_m) + \frac{1}{2} \mu_m^\top \Lambda_m^{-1} \mu_m \right)    (5.77)
\propto \exp\left( -\frac{1}{2} (w - \mu_m)^\top \Lambda_m^{-1} (w - \mu_m) \right).    (5.78)
All terms that do not include the parameter vector w have been omitted; they
are implied by the fact that the distribution must be normalized to integrate to 1.
Equation 3.23 shows how to normalize a multivariate Gaussian distribution.
Examining this posterior distribution allows us to gain some intuition for the
effect of Bayesian inference. In most situations, we set µ_0 to 0. If we set \Lambda_0 = \frac{1}{\alpha} I,
then µ_m gives the same estimate of w as does frequentist linear regression with a
weight decay penalty of \alpha\, w^\top w. One difference is that the Bayesian estimate is
undefined if α is set to zero—we are not allowed to begin the Bayesian learning
process with an infinitely wide prior on w. The more important difference is that
the Bayesian estimate provides a covariance matrix, showing how likely all the
different values of w are, rather than providing only the estimate µ_m.
5.6.1 Maximum A Posteriori (MAP) Estimation
While the most principled approach is to make predictions using the full Bayesian
posterior distribution over the parameter θ, it is still often desirable to have a
single point estimate. One common reason for desiring a point estimate is that
most operations involving the Bayesian posterior for most interesting models are
intractable, and a point estimate offers a tractable approximation. Rather than
simply returning to the maximum likelihood estimate, we can still gain some of
the benefit of the Bayesian approach by allowing the prior to influence the choice
of the point estimate. One rational way to do this is to choose the maximum
a posteriori (MAP) point estimate. The MAP estimate chooses the point of
maximal posterior probability (or maximal probability density in the more common
case of continuous ):
θ
θMAP = arg max
θ
p( ) = arg max
θ x
|
θ
log ( ) + log ( )
p x θ
| p θ . (5.79)
We recognize, above on the right hand side, log p(x θ
| ), i.e. the standard log-
likelihood term, and , corresponding to the prior distribution.
log ( )
p θ
As an example, consider a linear regression model with a Gaussian prior on
the weights w. If this prior is given by \mathcal{N}(w; 0, \frac{1}{\lambda} I^2), then the log-prior term in
equation 5.79 is proportional to the familiar \lambda\, w^\top w weight decay penalty, plus a
term that does not depend on w and does not affect the learning process. MAP
Bayesian inference with a Gaussian prior on the weights thus corresponds to weight
decay.
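To see this correspondence numerically, the sketch below (NumPy; the data, λ, learning rate, and iteration count are illustrative assumptions) runs gradient descent on the regularized negative log-likelihood, i.e. the MAP objective for linear regression with a Gaussian prior, and compares the result to the closed-form weight-decay solution.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, lam = 100, 4, 0.5
X = rng.normal(size=(m, n))
y = X @ rng.normal(size=n) + rng.normal(size=m)

w = np.zeros(n)
lr = 0.005
for _ in range(5000):
    # Gradient of 1/2*||Xw - y||^2 (negative log-likelihood with unit variance)
    # plus lam/2 * w^T w (negative log of the Gaussian prior, up to constants).
    grad = X.T @ (X @ w - y) + lam * w
    w -= lr * grad

w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
print("MAP via gradient descent:", w)
print("closed-form ridge       :", w_ridge)
```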
As with full Bayesian inference, MAP Bayesian inference has the advantage of
leveraging information that is brought by the prior and cannot be found in the
training data. This additional information helps to reduce the variance in the
MAP point estimate (in comparison to the ML estimate). However, it does so at
the price of increased bias.
Many regularized estimation strategies, such as maximum likelihood learning
regularized with weight decay, can be interpreted as making the MAP approxima-
tion to Bayesian inference. This view applies when the regularization consists of
adding an extra term to the objective function that corresponds to log p(θ ). Not
all regularization penalties correspond to MAP Bayesian inference. For example,
some regularizer terms may not be the logarithm of a probability distribution.
Other regularization terms depend on the data, which of course a prior probability
distribution is not allowed to do.
MAP Bayesian inference provides a straightforward way to design complicated
yet interpretable regularization terms. For example, a more complicated penalty
term can be derived by using a mixture of Gaussians, rather than a single Gaussian
distribution, as the prior (Nowlan and Hinton, 1992).
5.7 Supervised Learning Algorithms
Recall from section 5.1.3 that supervised learning algorithms are, roughly speaking,
learning algorithms that learn to associate some input with some output, given a
training set of examples of inputs x and outputs y. In many cases the outputs
y may be difficult to collect automatically and must be provided by a human
“supervisor,” but the term still applies even when the training set targets were
collected automatically.
5.7.1 Probabilistic Supervised Learning
Most supervised learning algorithms in this book are based on estimating a
probability distribution p(y | x). We can do this simply by using maximum
likelihood estimation to find the best parameter vector θ for a parametric family
of distributions p(y | x; θ).
We have already seen that linear regression corresponds to the family
p(y \mid x; \theta) = \mathcal{N}(y; \theta^\top x, I).    (5.80)
We can generalize linear regression to the classification scenario by defining a
different family of probability distributions. If we have two classes, class 0 and
class 1, then we need only specify the probability of one of these classes. The
probability of class 1 determines the probability of class 0, because these two values
must add up to 1.
The normal distribution over real-valued numbers that we used for linear
regression is parametrized in terms of a mean. Any value we supply for this mean
is valid. A distribution over a binary variable is slightly more complicated, because
its mean must always be between 0 and 1. One way to solve this problem is to use
the logistic sigmoid function to squash the output of the linear function into the
interval (0, 1) and interpret that value as a probability:
p(y = 1 \mid x; \theta) = \sigma(\theta^\top x).    (5.81)
This approach is known as logistic regression (a somewhat strange name since
we use the model for classification rather than regression).
In the case of linear regression, we were able to find the optimal weights by
solving the normal equations. Logistic regression is somewhat more difficult. There
is no closed-form solution for its optimal weights. Instead, we must search for
them by maximizing the log-likelihood. We can do this by minimizing the negative
log-likelihood (NLL) using gradient descent.
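A minimal sketch of this procedure (NumPy; the synthetic dataset, learning rate, and iteration count are assumptions made for the example) fits logistic regression by gradient descent on the negative log-likelihood:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, n = 500, 2
X = rng.normal(size=(m, n))
theta_true = np.array([2.0, -3.0])
y = (rng.uniform(size=m) < sigmoid(X @ theta_true)).astype(float)

theta = np.zeros(n)
lr = 0.1
for _ in range(2000):
    p = sigmoid(X @ theta)            # p(y=1 | x; theta), eq. 5.81
    grad = X.T @ (p - y) / m          # gradient of the mean negative log-likelihood
    theta -= lr * grad

p = sigmoid(X @ theta)
nll = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print("estimated theta:", theta, " final NLL:", nll)
```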
This same strategy can be applied to essentially any supervised learning problem,
by writing down a parametric family of conditional probability distributions over
the right kind of input and output variables.
5.7.2 Support Vector Machines
One of the most influential approaches to supervised learning is the support vector
machine (Boser et al., 1992; Cortes and Vapnik, 1995). This model is similar to
logistic regression in that it is driven by a linear function w^⊤x + b. Unlike logistic
regression, the support vector machine does not provide probabilities, but only
outputs a class identity. The SVM predicts that the positive class is present when
w^⊤x + b is positive. Likewise, it predicts that the negative class is present when
w^⊤x + b is negative.
One key innovation associated with support vector machines is the kernel
trick. The kernel trick consists of observing that many machine learning algorithms
can be written exclusively in terms of dot products between examples. For example,
it can be shown that the linear function used by the support vector machine can
be re-written as

w^\top x + b = b + \sum_{i=1}^{m} \alpha_i\, x^\top x^{(i)},    (5.82)

where x^{(i)} is a training example and α is a vector of coefficients. Rewriting the
learning algorithm this way allows us to replace x by the output of a given feature
function φ(x) and the dot product with a function k(x, x^{(i)}) = φ(x) · φ(x^{(i)}) called
a kernel. The · operator represents an inner product analogous to φ(x)^⊤ φ(x^{(i)}).
For some feature spaces, we may not use literally the vector inner product. In
some infinite dimensional spaces, we need to use other kinds of inner products, for
example, inner products based on integration rather than summation. A complete
development of these kinds of inner products is beyond the scope of this book.
After replacing dot products with kernel evaluations, we can make predictions
using the function

f(x) = b + \sum_{i} \alpha_i\, k(x, x^{(i)}).    (5.83)
This function is nonlinear with respect to x, but the relationship between φ(x)
and f (x) is linear. Also, the relationship between α and f(x) is linear. The
kernel-based function is exactly equivalent to preprocessing the data by applying
φ(x) to all inputs, then learning a linear model in the new transformed space.
The kernel trick is powerful for two reasons. First, it allows us to learn models
that are nonlinear as a function of x using convex optimization techniques that are
guaranteed to converge efficiently. This is possible because we consider φ fixed and
optimize only α, i.e., the optimization algorithm can view the decision function
as being linear in a different space. Second, the kernel function k often admits
an implementation that is significantly more computationally efficient than naively
constructing two φ(x) vectors and explicitly taking their dot product.
In some cases, φ(x) can even be infinite dimensional, which would result in
an infinite computational cost for the naive, explicit approach. In many cases,
k(x, x′) is a nonlinear, tractable function of x even when φ(x) is intractable. As
an example of an infinite-dimensional feature space with a tractable kernel, we
construct a feature mapping φ(x) over the non-negative integers x. Suppose that
this mapping returns a vector containing x ones followed by infinitely many zeros.
We can write a kernel function k(x, x^{(i)}) = min(x, x^{(i)}) that is exactly equivalent
to the corresponding infinite-dimensional dot product.
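This equivalence is easy to verify. In the sketch below (NumPy), the infinite feature vector is truncated to a finite length, an assumption that does not affect the result as long as the length exceeds the integer inputs used:

```python
import numpy as np

def phi(x, length=20):
    """Feature map over non-negative integers: x ones followed by zeros."""
    v = np.zeros(length)
    v[:x] = 1.0
    return v

def k(x, x2):
    """Kernel that reproduces the dot product of the (truncated) feature vectors."""
    return min(x, x2)

for x, x2 in [(3, 7), (5, 5), (0, 4)]:
    assert phi(x) @ phi(x2) == k(x, x2)
    print(x, x2, "->", k(x, x2))
```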
The most commonly used kernel is the Gaussian kernel

k(u, v) = \mathcal{N}(u - v; 0, \sigma^2 I),    (5.84)

where \mathcal{N}(x; \mu, \Sigma) is the standard normal density. This kernel is also known as
the radial basis function (RBF) kernel, because its value decreases along lines
in v space radiating outward from u. The Gaussian kernel corresponds to a dot
product in an infinite-dimensional space, but the derivation of this space is less
straightforward than in our example of the min kernel over the integers.
We can think of the Gaussian kernel as performing a kind of template match-
ing. A training example x associated with training label y becomes a template
for class y. When a test point x′ is near x according to Euclidean distance, the
Gaussian kernel has a large response, indicating that x′ is very similar to the x
template. The model then puts a large weight on the associated training label y.
Overall, the prediction will combine many such training labels weighted by the
similarity of the corresponding training examples.
Support vector machines are not the only algorithm that can be enhanced
using the kernel trick. Many other linear models can be enhanced in this way. The
category of algorithms that employ the kernel trick is known as kernel machines
or kernel methods (Williams and Rasmussen, 1996; Schölkopf et al., 1999).
A major drawback to kernel machines is that the cost of evaluating the decision
function is linear in the number of training examples, because the i-th example
contributes a term α_i k(x, x^{(i)}) to the decision function. Support vector machines
are able to mitigate this by learning an α vector that contains mostly zeros.
Classifying a new example then requires evaluating the kernel function only for
the training examples that have non-zero αi. These training examples are known
as support vectors.
Kernel machines also suffer from a high computational cost of training when
the dataset is large. We will revisit this idea in section 5.9. Kernel machines with
generic kernels struggle to generalize well. We will explain why in section 5.11. The
modern incarnation of deep learning was designed to overcome these limitations of
kernel machines. The current deep learning renaissance began when Hinton et al.
(2006) demonstrated that a neural network could outperform the RBF kernel SVM
on the MNIST benchmark.
5.7.3 Other Simple Supervised Learning Algorithms
We have already briefly encountered another non-probabilistic supervised learning
algorithm, nearest neighbor regression. More generally, k-nearest neighbors is
a family of techniques that can be used for classification or regression. As a
non-parametric learning algorithm, k-nearest neighbors is not restricted to a fixed
number of parameters. We usually think of the k-nearest neighbors algorithm
as not having any parameters, but rather implementing a simple function of the
training data. In fact, there is not even really a training stage or learning process.
Instead, at test time, when we want to produce an output y for a new test input x,
we find the k-nearest neighbors to x in the training data X. We then return the
average of the corresponding y values in the training set. This works for essentially
any kind of supervised learning where we can define an average over y values. In
the case of classification, we can average over one-hot code vectors c with c_y = 1
and c_i = 0 for all other values of i. We can then interpret the average over these
one-hot codes as giving a probability distribution over classes. As a non-parametric
learning algorithm, k-nearest neighbor can achieve very high capacity. For example,
suppose we have a multiclass classification task and measure performance with 0-1
loss. In this setting, 1-nearest neighbor converges to double the Bayes error as the
number of training examples approaches infinity. The error in excess of the Bayes
error results from choosing a single neighbor by breaking ties between equally
distant neighbors randomly. When there is infinite training data, all test points x
will have infinitely many training set neighbors at distance zero. If we allow the
algorithm to use all of these neighbors to vote, rather than randomly choosing one
of them, the procedure converges to the Bayes error rate. The high capacity of
k-nearest neighbors allows it to obtain high accuracy given a large training set.
However, it does so at high computational cost, and it may generalize very badly
given a small, finite training set. One weakness of k-nearest neighbors is that it
cannot learn that one feature is more discriminative than another. For example,
imagine we have a regression task with x ∈ R^100
drawn from an isotropic Gaussian
143
CHAPTER 5. MACHINE LEARNING BASICS
distribution, but only a single variable x1 is relevant to the output. Suppose
further that this feature simply encodes the output directly, i.e. that y = x1 in all
cases. Nearest neighbor regression will not be able to detect this simple pattern.
The nearest neighbor of most points x will be determined by the large number of
features x2 through x100, not by the lone feature x1 . Thus the output on small
training sets will essentially be random.
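The sketch below (NumPy; the dimensionality, sample sizes, and the choice k = 1 are assumptions made for the illustration) implements nearest neighbor regression and reproduces this failure mode: with all 100 features the predictions are nearly useless, while using x_1 alone they are nearly perfect.

```python
import numpy as np

def knn_regress(X_train, y_train, X_test, k=1):
    """Predict each test point as the mean y of its k nearest training points."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(0)
n_train, n_test, d = 500, 200, 100
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train, y_test = X_train[:, 0], X_test[:, 0]   # only x1 matters: y = x1

mse_all = np.mean((knn_regress(X_train, y_train, X_test) - y_test) ** 2)
mse_x1 = np.mean((knn_regress(X_train[:, :1], y_train, X_test[:, :1]) - y_test) ** 2)
print("MSE using all 100 features:", mse_all)   # close to Var(x1) = 1, i.e. random
print("MSE using only x1:         ", mse_x1)    # much smaller
```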
Figure 5.7: Diagrams describing how a decision tree works. (Top)Each node of the tree
chooses to send the input example to the child node on the left (0) or the child node on
the right (1). Internal nodes are drawn as circles and leaf nodes as squares. Each node is
displayed with a binary string identifier corresponding to its position in the tree, obtained
by appending a bit to its parent identifier (0=choose left or top, 1=choose right or bottom).
(Bottom)The tree divides space into regions. The 2D plane shows how a decision tree
might divide R2. The nodes of the tree are plotted in this plane, with each internal node
drawn along the dividing line it uses to categorize examples, and leaf nodes drawn in the
center of the region of examples they receive. The result is a piecewise-constant function,
with one piece per leaf. Each leaf requires at least one training example to define, so it is
not possible for the decision tree to learn a function that has more local maxima than the
number of training examples.
Another type of learning algorithm that also breaks the input space into regions
and has separate parameters for each region is the decision tree (Breiman et al.,
1984) and its many variants. As shown in figure 5.7, each node of the decision
tree is associated with a region in the input space, and internal nodes break that
region into one sub-region for each child of the node (typically using an axis-aligned
cut). Space is thus sub-divided into non-overlapping regions, with a one-to-one
correspondence between leaf nodes and input regions. Each leaf node usually maps
every point in its input region to the same output. Decision trees are usually
trained with specialized algorithms that are beyond the scope of this book. The
learning algorithm can be considered non-parametric if it is allowed to learn a tree
of arbitrary size, though decision trees are usually regularized with size constraints
that turn them into parametric models in practice. Decision trees as they are
typically used, with axis-aligned splits and constant outputs within each node,
struggle to solve some problems that are easy even for logistic regression. For
example, if we have a two-class problem and the positive class occurs wherever
x2 > x1, the decision boundary is not axis-aligned. The decision tree will thus
need to approximate the decision boundary with many nodes, implementing a step
function that constantly walks back and forth across the true decision function
with axis-aligned steps.
As we have seen, nearest neighbor predictors and decision trees have many
limitations. Nonetheless, they are useful learning algorithms when computational
resources are constrained. We can also build intuition for more sophisticated
learning algorithms by thinking about the similarities and differences between
sophisticated algorithms and k-NN or decision tree baselines.

See Murphy (2012), Bishop (2006), Hastie et al. (2001) or other machine
learning textbooks for more material on traditional supervised learning algorithms.
5.8 Unsupervised Learning Algorithms
Recall from section 5.1.3 that unsupervised algorithms are those that experience
only “features” but not a supervision signal. The distinction between supervised
and unsupervised algorithms is not formally and rigidly defined because there is no
objective test for distinguishing whether a value is a feature or a target provided by
a supervisor. Informally, unsupervised learning refers to most attempts to extract
information from a distribution that do not require human labor to annotate
examples. The term is usually associated with density estimation, learning to
draw samples from a distribution, learning to denoise data from some distribution,
finding a manifold that the data lies near, or clustering the data into groups of
related examples.
A classic unsupervised learning task is to find the “best” representation of the
data. By ‘best’ we can mean different things, but generally speaking we are looking
for a representation that preserves as much information about x as possible while
obeying some penalty or constraint aimed at keeping the representation simpler or
more accessible than x itself.
There are multiple ways of defining a simpler representation. Three of the
most common include lower dimensional representations, sparse representations
and independent representations. Low-dimensional representations attempt to
compress as much information about x as possible in a smaller representation.
Sparse representations (Barlow, 1989; Olshausen and Field, 1996; Hinton and
Ghahramani, 1997) embed the dataset into a representation whose entries are
mostly zeroes for most inputs. The use of sparse representations typically requires
increasing the dimensionality of the representation, so that the representation
becoming mostly zeroes does not discard too much information. This results in an
overall structure of the representation that tends to distribute data along the axes
of the representation space. Independent representations attempt to disentangle
the sources of variation underlying the data distribution such that the dimensions
of the representation are statistically independent.
Of course these three criteria are certainly not mutually exclusive. Low-
dimensional representations often yield elements that have fewer or weaker de-
pendencies than the original high-dimensional data. This is because one way to
reduce the size of a representation is to find and remove redundancies. Identifying
and removing more redundancy allows the dimensionality reduction algorithm to
achieve more compression while discarding less information.
The notion of representation is one of the central themes of deep learning and
therefore one of the central themes in this book. In this section, we develop some
simple examples of representation learning algorithms. Together, these example
algorithms show how to operationalize all three of the criteria above. Most of the
remaining chapters introduce additional representation learning algorithms that
develop these criteria in different ways or introduce other criteria.
5.8.1 Principal Components Analysis
In section 2.12, we saw that the principal components analysis algorithm provides
a means of compressing data. We can also view PCA as an unsupervised learning
algorithm that learns a representation of data. This representation is based on
two of the criteria for a simple representation described above. PCA learns a
Figure 5.8: PCA learns a linear projection that aligns the direction of greatest variance
with the axes of the new space. (Left) The original data consists of samples of x. In this
space, the variance might occur along directions that are not axis-aligned. (Right) The
transformed data z = x^⊤W now varies most along the axis z_1. The direction of second
most variance is now along z_2.
representation that has lower dimensionality than the original input. It also learns
a representation whose elements have no linear correlation with each other. This
is a first step toward the criterion of learning representations whose elements are
statistically independent. To achieve full independence, a representation learning
algorithm must also remove the nonlinear relationships between variables.
PCA learns an orthogonal, linear transformation of the data that projects an
input x to a representation z as shown in figure 5.8. In section 2.12, we saw that
we could learn a one-dimensional representation that best reconstructs the original
data (in the sense of mean squared error) and that this representation actually
corresponds to the first principal component of the data. Thus we can use PCA
as a simple and effective dimensionality reduction method that preserves as much
of the information in the data as possible (again, as measured by least-squares
reconstruction error). In the following, we will study how the PCA representation
decorrelates the original data representation X.
Let us consider the m × n-dimensional design matrix X. We will assume that
the data has a mean of zero, E[x] = 0. If this is not the case, the data can easily
be centered by subtracting the mean from all examples in a preprocessing step.

The unbiased sample covariance matrix associated with X is given by:

\mathrm{Var}[x] = \frac{1}{m-1} X^\top X.    (5.85)
PCA finds a representation (through linear transformation) z = x^\top W where
Var[z] is diagonal.

In section 2.12, we saw that the principal components of a design matrix X
are given by the eigenvectors of X^\top X. From this view,

X^\top X = W \Lambda W^\top.    (5.86)

In this section, we exploit an alternative derivation of the principal components. The
principal components may also be obtained via the singular value decomposition.
Specifically, they are the right singular vectors of X. To see this, let W be the
right singular vectors in the decomposition X = U \Sigma W^\top. We then recover the
original eigenvector equation with W as the eigenvector basis:

X^\top X = \left( U \Sigma W^\top \right)^\top U \Sigma W^\top = W \Sigma^2 W^\top.    (5.87)
The SVD is helpful to show that PCA results in a diagonal Var[z]. Using the
SVD of X, we can express the variance of X as:

\mathrm{Var}[x] = \frac{1}{m-1} X^\top X    (5.88)
= \frac{1}{m-1} \left( U \Sigma W^\top \right)^\top U \Sigma W^\top    (5.89)
= \frac{1}{m-1} W \Sigma^\top U^\top U \Sigma W^\top    (5.90)
= \frac{1}{m-1} W \Sigma^2 W^\top,    (5.91)
where we use the fact that U^\top U = I because the U matrix of the singular value
decomposition is defined to be orthogonal. This shows that if we take z = x^\top W,
we can ensure that the covariance of z is diagonal as required:

\mathrm{Var}[z] = \frac{1}{m-1} Z^\top Z    (5.92)
= \frac{1}{m-1} W^\top X^\top X W    (5.93)
= \frac{1}{m-1} W^\top W \Sigma^2 W^\top W    (5.94)
= \frac{1}{m-1} \Sigma^2,    (5.95)

where this time we use the fact that W^\top W = I, again from the definition of the
SVD.
The above analysis shows that when we project the data x to z, via the linear
transformation W, the resulting representation has a diagonal covariance matrix
(as given by Σ²), which immediately implies that the individual elements of z are
mutually uncorrelated.
This ability of PCA to transform data into a representation where the elements
are mutually uncorrelated is a very important property of PCA. It is a simple
example of a representation that attempts to disentangle the unknown factors of
variation underlying the data. In the case of PCA, this disentangling takes the
form of finding a rotation of the input space (described by W) that aligns the
principal axes of variance with the basis of the new representation space associated
with z.
While correlation is an important category of dependency between elements of
the data, we are also interested in learning representations that disentangle more
complicated forms of feature dependencies. For this, we will need more than what
can be done with a simple linear transformation.
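The decorrelation property is easy to demonstrate. The following sketch (NumPy; the correlated synthetic data is an assumption for the example) projects centered data onto the right singular vectors of X and verifies that the covariance of z is essentially diagonal, as in equation 5.95.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000
# Correlated 2-D data (an arbitrary linear mix of independent sources).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
X = rng.normal(size=(m, 2)) @ A.T
X -= X.mean(axis=0)                      # center the data so that E[x] = 0

U, S, Wt = np.linalg.svd(X, full_matrices=False)
W = Wt.T                                 # right singular vectors = principal directions
Z = X @ W                                # z = x^T W for every example

cov_x = X.T @ X / (m - 1)
cov_z = Z.T @ Z / (m - 1)
print("Cov[x]:\n", cov_x)                # off-diagonal entries are far from zero
print("Cov[z]:\n", np.round(cov_z, 6))   # off-diagonal entries are ~0 (eq. 5.95)
print("Sigma^2/(m-1):", S**2 / (m - 1))  # matches the diagonal of Cov[z]
```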
5.8.2 k-means Clustering

Another example of a simple representation learning algorithm is k-means clustering.
The k-means clustering algorithm divides the training set into k different clusters
of examples that are near each other. We can thus think of the algorithm as
providing a k-dimensional one-hot code vector h representing an input x. If x
belongs to cluster i, then hi = 1 and all other entries of the representation h are
zero.
The one-hot code provided by k-means clustering is an example of a sparse
representation, because the majority of its entries are zero for every input. Later,
we will develop other algorithms that learn more flexible sparse representations,
where more than one entry can be non-zero for each input x. One-hot codes
are an extreme example of sparse representations that lose many of the benefits
of a distributed representation. The one-hot code still confers some statistical
advantages (it naturally conveys the idea that all examples in the same cluster are
similar to each other) and it confers the computational advantage that the entire
representation may be captured by a single integer.
The k-means algorithm works by initializing k different centroids {µ^{(1)}, \ldots, µ^{(k)}}
to different values, then alternating between two different steps until convergence.
In one step, each training example is assigned to cluster i, where i is the index of
the nearest centroid µ^{(i)}. In the other step, each centroid µ^{(i)} is updated to the
mean of all training examples x^{(j)} assigned to cluster i.
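A minimal implementation of these two alternating steps (NumPy; the initialization scheme, iteration count, and synthetic data are assumptions made for the sketch):

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centroids to k randomly chosen training examples.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 1: assign each example to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Step 2: move each centroid to the mean of its assigned examples.
        for i in range(k):
            if np.any(assign == i):
                centroids[i] = X[assign == i].mean(axis=0)
    return centroids, assign

rng = np.random.default_rng(1)
# Three synthetic clusters in 2-D.
X = np.vstack([rng.normal(c, 0.3, size=(100, 2)) for c in [(0, 0), (3, 3), (0, 3)]])
centroids, assign = kmeans(X, k=3)
print("centroids:\n", centroids)
# One-hot code h for the first example: h_i = 1 for its cluster, 0 elsewhere.
h = np.eye(3)[assign[0]]
print("one-hot representation of the first example:", h)
```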
One difficulty pertaining to clustering is that the clustering problem is inherently
ill-posed, in the sense that there is no single criterion that measures how well a
clustering of the data corresponds to the real world. We can measure properties of
the clustering such as the average Euclidean distance from a cluster centroid to the
members of the cluster. This allows us to tell how well we are able to reconstruct
the training data from the cluster assignments. We do not know how well the
cluster assignments correspond to properties of the real world. Moreover, there
may be many different clusterings that all correspond well to some property of
the real world. We may hope to find a clustering that relates to one feature but
obtain a different, equally valid clustering that is not relevant to our task. For
example, suppose that we run two clustering algorithms on a dataset consisting of
images of red trucks, images of red cars, images of gray trucks, and images of gray
cars. If we ask each clustering algorithm to find two clusters, one algorithm may
find a cluster of cars and a cluster of trucks, while another may find a cluster of
red vehicles and a cluster of gray vehicles. Suppose we also run a third clustering
algorithm, which is allowed to determine the number of clusters. This may assign
the examples to four clusters, red cars, red trucks, gray cars, and gray trucks. This
new clustering now at least captures information about both attributes, but it has
lost information about similarity. Red cars are in a different cluster from gray
cars, just as they are in a different cluster from gray trucks. The output of the
clustering algorithm does not tell us that red cars are more similar to gray cars
than they are to gray trucks. They are different from both things, and that is all
we know.
These issues illustrate some of the reasons that we may prefer a distributed
representation to a one-hot representation. A distributed representation could have
two attributes for each vehicle—one representing its color and one representing
whether it is a car or a truck. It is still not entirely clear what the optimal
distributed representation is (how can the learning algorithm know whether the
two attributes we are interested in are color and car-versus-truck rather than
manufacturer and age?) but having many attributes reduces the burden on the
algorithm to guess which single attribute we care about, and allows us to measure
similarity between objects in a fine-grained way by comparing many attributes
instead of just testing whether one attribute matches.
5.9 Stochastic Gradient Descent
Nearly all of deep learning is powered by one very important algorithm: stochastic
gradient descent or SGD. Stochastic gradient descent is an extension of the
gradient descent algorithm introduced in section 4.3.
A recurring problem in machine learning is that large training sets are necessary
for good generalization, but large training sets are also more computationally
expensive.
The cost function used by a machine learning algorithm often decomposes as a
sum over training examples of some per-example loss function. For example, the
negative conditional log-likelihood of the training data can be written as
J(\theta) = \mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{data}} L(x, y, \theta) = \frac{1}{m} \sum_{i=1}^{m} L(x^{(i)}, y^{(i)}, \theta),    (5.96)

where L is the per-example loss L(x, y, \theta) = -\log p(y \mid x; \theta).
For these additive cost functions, gradient descent requires computing

\nabla_{\theta} J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \nabla_{\theta} L(x^{(i)}, y^{(i)}, \theta).    (5.97)
The computational cost of this operation is O(m). As the training set size grows to
billions of examples, the time to take a single gradient step becomes prohibitively
long.
The insight of stochastic gradient descent is that the gradient is an expectation.
The expectation may be approximately estimated using a small set of samples.
Specifically, on each step of the algorithm, we can sample a minibatch of examples
B = {x(1), . . . , x(m)} drawn uniformly from the training set. The minibatch size
m is typically chosen to be a relatively small number of examples, ranging from
1 to a few hundred. Crucially, m is usually held fixed as the training set size m
grows. We may fit a training set with billions of examples using updates computed
on only a hundred examples.
The estimate of the gradient is formed as

g = \frac{1}{m'} \nabla_{\theta} \sum_{i=1}^{m'} L(x^{(i)}, y^{(i)}, \theta),    (5.98)

using examples from the minibatch B. The stochastic gradient descent algorithm
then follows the estimated gradient downhill:

\theta \leftarrow \theta - \epsilon\, g,    (5.99)

where \epsilon is the learning rate.
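Putting equations 5.96 through 5.99 together, here is a bare-bones SGD loop for linear regression with squared loss (NumPy; the model, learning rate, minibatch size, and synthetic data are assumptions chosen for the illustration, not prescriptions from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10_000, 5
X = rng.normal(size=(m, n))
w_true = rng.normal(size=n)
y = X @ w_true + 0.1 * rng.normal(size=m)

theta = np.zeros(n)
eps, m_batch = 0.05, 64          # learning rate and minibatch size m'
for step in range(2000):
    idx = rng.choice(m, size=m_batch, replace=False)   # sample a minibatch B
    Xb, yb = X[idx], y[idx]
    # Gradient of the average per-example loss 1/2*(y_hat - y)^2 over the minibatch (eq. 5.98).
    g = Xb.T @ (Xb @ theta - yb) / m_batch
    theta -= eps * g                                    # eq. 5.99
print("max |theta - w_true|:", np.abs(theta - w_true).max())
```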
Gradient descent in general has often been regarded as slow or unreliable. In
the past, the application of gradient descent to non-convex optimization problems
was regarded as foolhardy or unprincipled. Today, we know that the machine
learning models described in part II work very well when trained with gradient
descent. The optimization algorithm may not be guaranteed to arrive at even a
local minimum in a reasonable amount of time, but it often finds a very low value
of the cost function quickly enough to be useful.
Stochastic gradient descent has many important uses outside the context of
deep learning. It is the main way to train large linear models on very large
datasets. For a fixed model size, the cost per SGD update does not depend on the
training set size m. In practice, we often use a larger model as the training set size
increases, but we are not forced to do so. The number of updates required to reach
convergence usually increases with training set size. However, as m approaches
infinity, the model will eventually converge to its best possible test error before
SGD has sampled every example in the training set. Increasing m further will not
extend the amount of training time needed to reach the model’s best possible test
error. From this point of view, one can argue that the asymptotic cost of training
a model with SGD is O(1) as a function of m.
Prior to the advent of deep learning, the main way to learn nonlinear models
was to use the kernel trick in combination with a linear model. Many kernel learning
algorithms require constructing an m × m matrix G_{i,j} = k(x^{(i)}, x^{(j)}). Constructing
this matrix has computational cost O(m²), which is clearly undesirable for datasets
with billions of examples. In academia, starting in 2006, deep learning was
initially interesting because it was able to generalize to new examples better
than competing algorithms when trained on medium-sized datasets with tens of
thousands of examples. Soon after, deep learning garnered additional interest in
industry, because it provided a scalable way of training nonlinear models on large
datasets.
Stochastic gradient descent and many enhancements to it are described further
in chapter 8.
5.10 Building a Machine Learning Algorithm
Nearly all deep learning algorithms can be described as particular instances of
a fairly simple recipe: combine a specification of a dataset, a cost function, an
optimization procedure and a model.
For example, the linear regression algorithm combines a dataset consisting of
X and y, the cost function

J(w, b) = -\mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{data}} \log p_{model}(y \mid x),    (5.100)

the model specification p_{model}(y \mid x) = \mathcal{N}(y; x^\top w + b, 1), and, in most cases, the
optimization algorithm defined by solving for where the gradient of the cost is zero
using the normal equations.
By realizing that we can replace any of these components mostly independently
from the others, we can obtain a very wide variety of algorithms.
The cost function typically includes at least one term that causes the learning
process to perform statistical estimation. The most common cost function is the
negative log-likelihood, so that minimizing the cost function causes maximum
likelihood estimation.
The cost function may also include additional terms, such as regularization
terms. For example, we can add weight decay to the linear regression cost function
to obtain
J(w, b) = \lambda \|w\|_2^2 - \mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{data}} \log p_{model}(y \mid x).    (5.101)
This still allows closed-form optimization.
If we change the model to be nonlinear, then most cost functions can no longer
be optimized in closed form. This requires us to choose an iterative numerical
optimization procedure, such as gradient descent.
The recipe for constructing a learning algorithm by combining models, costs, and
optimization algorithms supports both supervised and unsupervised learning. The
linear regression example shows how to support supervised learning. Unsupervised
learning can be supported by defining a dataset that contains only X and providing
an appropriate unsupervised cost and model. For example, we can obtain the first
PCA vector by specifying that our loss function is
$$J(\mathbf{w}) = \mathbb{E}_{\mathbf{x} \sim \hat{p}_{\text{data}}} \|\mathbf{x} - r(\mathbf{x}; \mathbf{w})\|_2^2 \quad (5.102)$$
while our model is defined to have w with norm one and reconstruction function r(x) = w⊤x w.
In some cases, the cost function may be a function that we cannot actually
evaluate, for computational reasons. In these cases, we can still approximately
minimize it using iterative numerical optimization so long as we have some way of
approximating its gradients.
Most machine learning algorithms make use of this recipe, though it may not
immediately be obvious. If a machine learning algorithm seems especially unique or
hand-designed, it can usually be understood as using a special-case optimizer. Some
models such as decision trees or k-means require special-case optimizers because
their cost functions have flat regions that make them inappropriate for minimization
by gradient-based optimizers. Recognizing that most machine learning algorithms
can be described using this recipe helps to see the different algorithms as part of a
taxonomy of methods for doing related tasks that work for similar reasons, rather
than as a long list of algorithms that each have separate justifications.
5.11 Challenges Motivating Deep Learning
The simple machine learning algorithms described in this chapter work very well on
a wide variety of important problems. However, they have not succeeded in solving
the central problems in AI, such as recognizing speech or recognizing objects.
The development of deep learning was motivated in part by the failure of
traditional algorithms to generalize well on such AI tasks.
This section is about how the challenge of generalizing to new examples becomes
exponentially more difficult when working with high-dimensional data, and how
the mechanisms used to achieve generalization in traditional machine learning
are insufficient to learn complicated functions in high-dimensional spaces. Such
spaces also often impose high computational costs. Deep learning was designed to
overcome these and other obstacles.
5.11.1 The Curse of Dimensionality
Many machine learning problems become exceedingly difficult when the number
of dimensions in the data is high. This phenomenon is known as the curse of
dimensionality. Of particular concern is that the number of possible distinct
configurations of a set of variables increases exponentially as the number of variables
increases.
Figure 5.9: As the number of relevant dimensions of the data increases (from left to right), the number of configurations of interest may grow exponentially. (Left) In this one-dimensional example, we have one variable for which we only care to distinguish 10 regions of interest. With enough examples falling within each of these regions (each region corresponds to a cell in the illustration), learning algorithms can easily generalize correctly. A straightforward way to generalize is to estimate the value of the target function within each region (and possibly interpolate between neighboring regions). (Center) With 2 dimensions it is more difficult to distinguish 10 different values of each variable. We need to keep track of up to 10 × 10 = 100 regions, and we need at least that many examples to cover all those regions. (Right) With 3 dimensions this grows to 10³ = 1000 regions and at least that many examples. For d dimensions and v values to be distinguished along each axis, we seem to need O(v^d) regions and examples. This is an instance of the curse of dimensionality. Figure graciously provided by Nicolas Chapados.
The curse of dimensionality arises in many places in computer science, and
especially so in machine learning.
One challenge posed by the curse of dimensionality is a statistical challenge.
As illustrated in figure 5.9, a statistical challenge arises because the number of
possible configurations of x is much larger than the number of training examples.
To understand the issue, let us consider that the input space is organized into a
grid, like in the figure. We can describe low-dimensional space with a low number
of grid cells that are mostly occupied by the data. When generalizing to a new data
point, we can usually tell what to do simply by inspecting the training examples
that lie in the same cell as the new input. For example, if estimating the probability
density at some point x, we can just return the number of training examples in
the same unit volume cell as x, divided by the total number of training examples.
If we wish to classify an example, we can return the most common class of training
examples in the same cell. If we are doing regression we can average the target
values observed over the examples in that cell. But what about the cells for which
we have seen no example? Because in high-dimensional spaces the number of
configurations is huge, much larger than our number of examples, a typical grid cell
has no training example associated with it. How could we possibly say something
meaningful about these new configurations? Many traditional machine learning
algorithms simply assume that the output at a new point should be approximately
the same as the output at the nearest training point.
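A small numerical sketch (illustrative assumptions: uniform data on the unit hypercube and 10 cells per axis) shows how quickly the grid-based view breaks down: with a fixed number of examples, the fraction of cells that contain any training data collapses as the dimension grows, so most new inputs fall in cells about which such an estimator knows nothing.

import numpy as np

def occupied_cell_fraction(m=10000, dims=(1, 2, 3, 5, 10), bins=10, seed=0):
    # Fraction of the bins**d grid cells containing at least one of m
    # uniform samples from the unit hypercube, for several dimensions d.
    rng = np.random.default_rng(seed)
    fractions = {}
    for d in dims:
        X = rng.random((m, d))
        cells = np.floor(X * bins).astype(int)          # cell index per axis
        occupied = len({tuple(c) for c in cells})       # distinct occupied cells
        fractions[d] = occupied / float(bins ** d)
    return fractions

# Roughly: near 1.0 for d <= 3, about 0.1 for d = 5, about 1e-6 for d = 10.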
5.11.2 Local Constancy and Smoothness Regularization
In order to generalize well, machine learning algorithms need to be guided by prior
beliefs about what kind of function they should learn. Previously, we have seen
these priors incorporated as explicit beliefs in the form of probability distributions
over parameters of the model. More informally, we may also discuss prior beliefs as
directly influencing the function itself and only indirectly acting on the parameters
via their effect on the function. Additionally, we informally discuss prior beliefs as
being expressed implicitly, by choosing algorithms that are biased toward choosing
some class of functions over another, even though these biases may not be expressed
(or even possible to express) in terms of a probability distribution representing our
degree of belief in various functions.
Among the most widely used of these implicit “priors” is the smoothness
prior or local constancy prior. This prior states that the function we learn
should not change very much within a small region.
Many simpler algorithms rely exclusively on this prior to generalize well, and
as a result they fail to scale to the statistical challenges involved in solving AI-
level tasks. Throughout this book, we will describe how deep learning introduces
additional (explicit and implicit) priors in order to reduce the generalization
error on sophisticated tasks. Here, we explain why the smoothness prior alone is
insufficient for these tasks.
There are many different ways to implicitly or explicitly express a prior belief
that the learned function should be smooth or locally constant. All of these different
methods are designed to encourage the learning process to learn a function f∗ that
satisfies the condition
$$f^*(\mathbf{x}) \approx f^*(\mathbf{x} + \boldsymbol{\epsilon}) \quad (5.103)$$
for most configurations x and small change ε. In other words, if we know a good
answer for an input x (for example, if x is a labeled training example) then that
answer is probably good in the neighborhood of x. If we have several good answers
in some neighborhood we would combine them (by some form of averaging or
interpolation) to produce an answer that agrees with as many of them as much as
possible.
An extreme example of the local constancy approach is the k-nearest neighbors
family of learning algorithms. These predictors are literally constant over each
region containing all the points x that have the same set of k nearest neighbors in
the training set. For k = 1, the number of distinguishable regions cannot be more
than the number of training examples.
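A minimal 1-nearest-neighbor sketch (illustrative, not from the book) makes the limitation explicit: every query is answered by copying the output of a single training example, so the predictor cannot distinguish more regions than it has training points.

import numpy as np

def one_nearest_neighbor(X_train, y_train, X_query):
    # Copy the label of the closest training example. The induced decision
    # regions are the Voronoi cells of the training set, so their number
    # cannot exceed the number of training examples.
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return y_train[np.argmin(dists, axis=1)]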
While the k-nearest neighbors algorithm copies the output from nearby training
examples, most kernel machines interpolate between training set outputs associated
with nearby training examples. An important class of kernels is the family of local kernels, where k(u, v) is large when u = v and decreases as u and v grow farther apart from each other. A local kernel can be thought of as a similarity function that performs template matching, by measuring how closely a test example x resembles each training example x^(i). Much of the modern motivation for deep
learning is derived from studying the limitations of local template matching and
how deep models are able to succeed in cases where local template matching fails
(Bengio et al., 2006b).
Decision trees also suffer from the limitations of exclusively smoothness-based
learning because they break the input space into as many regions as there are
leaves and use a separate parameter (or sometimes many parameters for extensions
of decision trees) in each region. If the target function requires a tree with at
least n leaves to be represented accurately, then at least n training examples are
required to fit the tree. A multiple of n is needed to achieve some level of statistical
confidence in the predicted output.
In general, to distinguish O(k) regions in input space, all of these methods
require O(k) examples. Typically there are O(k) parameters, with O(1) parameters
associated with each of the O(k) regions. The case of a nearest neighbor scenario,
where each training example can be used to define at most one region, is illustrated
in figure 5.10.
Is there a way to represent a complex function that has many more regions
to be distinguished than the number of training examples? Clearly, assuming
only smoothness of the underlying function will not allow a learner to do that.
For example, imagine that the target function is a kind of checkerboard. A
checkerboard contains many variations but there is a simple structure to them.
Imagine what happens when the number of training examples is substantially
smaller than the number of black and white squares on the checkerboard. Based
on only local generalization and the smoothness or local constancy prior, we would
be guaranteed to correctly guess the color of a new point if it lies within the same
checkerboard square as a training example. There is no guarantee that the learner
could correctly extend the checkerboard pattern to points lying in squares that do
not contain training examples. With this prior alone, the only information that an
example tells us is the color of its square, and the only way to get the colors of the
entire checkerboard right is to cover each of its cells with at least one example.

Figure 5.10: Illustration of how the nearest neighbor algorithm breaks up the input space into regions. An example (represented here by a circle) within each region defines the region boundary (represented here by the lines). The value associated with each example defines what the output should be for all points within the corresponding region. The regions defined by nearest neighbor matching form a geometric pattern called a Voronoi diagram. The number of these contiguous regions cannot grow faster than the number of training examples. While this figure illustrates the behavior of the nearest neighbor algorithm specifically, other machine learning algorithms that rely exclusively on the local smoothness prior for generalization exhibit similar behaviors: each training example only informs the learner about how to generalize in some neighborhood immediately surrounding that example.
The smoothness assumption and the associated non-parametric learning algo-
rithms work extremely well so long as there are enough examples for the learning
algorithm to observe high points on most peaks and low points on most valleys
of the true underlying function to be learned. This is generally true when the
function to be learned is smooth enough and varies in few enough dimensions.
In high dimensions, even a very smooth function can change smoothly but in a
different way along each dimension. If the function additionally behaves differently
in different regions, it can become extremely complicated to describe with a set of
training examples. If the function is complicated (we want to distinguish a huge
number of regions compared to the number of examples), is there any hope to
generalize well?
The answer to both of these questions—whether it is possible to represent
a complicated function efficiently, and whether it is possible for the estimated
function to generalize well to new inputs—is yes. The key insight is that a very
large number of regions, e.g., O(2^k), can be defined with O(k) examples, so long
as we introduce some dependencies between the regions via additional assumptions
about the underlying data generating distribution. In this way, we can actually
generalize non-locally (Bengio and Monperrus, 2005; Bengio et al., 2006c). Many
different deep learning algorithms provide implicit or explicit assumptions that are
reasonable for a broad range of AI tasks in order to capture these advantages.
Other approaches to machine learning often make stronger, task-specific as-
sumptions. For example, we could easily solve the checkerboard task by providing
the assumption that the target function is periodic. Usually we do not include such
strong, task-specific assumptions into neural networks so that they can generalize
to a much wider variety of structures. AI tasks have structure that is much too
complex to be limited to simple, manually specified properties such as periodicity,
so we want learning algorithms that embody more general-purpose assumptions.
The core idea in deep learning is that we assume that the data was generated by
the composition of factors or features, potentially at multiple levels in a hierarchy.
Many other similarly generic assumptions can further improve deep learning al-
gorithms. These apparently mild assumptions allow an exponential gain in the
relationship between the number of examples and the number of regions that can
be distinguished. These exponential gains are described more precisely in sections 6.4.1, 15.4 and 15.5. The exponential advantages conferred by the use of deep,
distributed representations counter the exponential challenges posed by the curse
of dimensionality.
160
CHAPTER 5. MACHINE LEARNING BASICS
5.11.3 Manifold Learning
An important concept underlying many ideas in machine learning is that of a
manifold.
A manifold is a connected region. Mathematically, it is a set of points,
associated with a neighborhood around each point. From any given point, the
manifold locally appears to be a Euclidean space. In everyday life, we experience
the surface of the world as a 2-D plane, but it is in fact a spherical manifold in
3-D space.
The definition of a neighborhood surrounding each point implies the existence
of transformations that can be applied to move on the manifold from one position
to a neighboring one. In the example of the world’s surface as a manifold, one can
walk north, south, east, or west.
Although there is a formal mathematical meaning to the term “manifold,” in
machine learning it tends to be used more loosely to designate a connected set
of points that can be approximated well by considering only a small number of
degrees of freedom, or dimensions, embedded in a higher-dimensional space. Each
dimension corresponds to a local direction of variation. See figure 5.11 for an
example of training data lying near a one-dimensional manifold embedded in two-
dimensional space. In the context of machine learning, we allow the dimensionality
of the manifold to vary from one point to another. This often happens when a
manifold intersects itself. For example, a figure eight is a manifold that has a single
dimension in most places but two dimensions at the intersection at the center.
Figure 5.11: Data sampled from a distribution in a two-dimensional space that is actually
concentrated near a one-dimensional manifold, like a twisted string. The solid line indicates
the underlying manifold that the learner should infer.
Many machine learning problems seem hopeless if we expect the machine
learning algorithm to learn functions with interesting variations across all of R^n. Manifold learning algorithms surmount this obstacle by assuming that most of R^n consists of invalid inputs, and that interesting inputs occur only along
a collection of manifolds containing a small subset of points, with interesting
variations in the output of the learned function occurring only along directions
that lie on the manifold, or with interesting variations happening only when we
move from one manifold to another. Manifold learning was introduced in the case
of continuous-valued data and the unsupervised learning setting, although this
probability concentration idea can be generalized to both discrete data and the
supervised learning setting: the key assumption remains that probability mass is
highly concentrated.
The assumption that the data lies along a low-dimensional manifold may not
always be correct or useful. We argue that in the context of AI tasks, such as
those that involve processing images, sounds, or text, the manifold assumption is
at least approximately correct. The evidence in favor of this assumption consists
of two categories of observations.
The first observation in favor of the manifold hypothesis is that the proba-
bility distribution over images, text strings, and sounds that occur in real life is
highly concentrated. Uniform noise essentially never resembles structured inputs
from these domains. Figure 5.12 shows how, instead, uniformly sampled points
look like the patterns of static that appear on analog television sets when no signal
is available. Similarly, if you generate a document by picking letters uniformly at
random, what is the probability that you will get a meaningful English-language
text? Almost zero, again, because most of the long sequences of letters do not
correspond to a natural language sequence: the distribution of natural language
sequences occupies a very small volume in the total space of sequences of letters.
Figure 5.12: Sampling images uniformly at random (by randomly picking each pixel
according to a uniform distribution) gives rise to noisy images. Although there is a non-
zero probability to generate an image of a face or any other object frequently encountered
in AI applications, we never actually observe this happening in practice. This suggests
that the images encountered in AI applications occupy a negligible proportion of the
volume of image space.
Of course, concentrated probability distributions are not sufficient to show
that the data lies on a reasonably small number of manifolds. We must also
establish that the examples we encounter are connected to each other by other
examples, with each example surrounded by other highly similar examples that
may be reached by applying transformations to traverse the manifold. The second
argument in favor of the manifold hypothesis is that we can also imagine such
neighborhoods and transformations, at least informally. In the case of images, we
can certainly think of many possible transformations that allow us to trace out a
manifold in image space: we can gradually dim or brighten the lights, gradually
move or rotate objects in the image, gradually alter the colors on the surfaces of
objects, etc. It remains likely that there are multiple manifolds involved in most
applications. For example, the manifold of images of human faces may not be
connected to the manifold of images of cat faces.
These thought experiments supporting the manifold hypotheses convey some in-
tuitive reasons supporting it. More rigorous experiments (Cayton, 2005; Narayanan and Mitter, 2010; Schölkopf et al., 1998; Roweis and Saul, 2000; Tenenbaum et al., 2000; Brand, 2003; Belkin and Niyogi, 2003; Donoho and Grimes, 2003; Weinberger and Saul, 2004) clearly support the hypothesis for a large class of datasets of interest in AI.
When the data lies on a low-dimensional manifold, it can be most natural
for machine learning algorithms to represent the data in terms of coordinates on
the manifold, rather than in terms of coordinates in R^n. In everyday life, we can
think of roads as 1-D manifolds embedded in 3-D space. We give directions to
specific addresses in terms of address numbers along these 1-D roads, not in terms
of coordinates in 3-D space. Extracting these manifold coordinates is challenging,
but holds the promise to improve many machine learning algorithms. This general
principle is applied in many contexts. Figure 5.13 shows the manifold structure of a dataset consisting of faces. By the end of this book, we will have developed the methods necessary to learn such a manifold structure. In figure 20.6, we will see how a machine learning algorithm can successfully accomplish this goal.
This concludes part I, which has provided the basic concepts in mathematics
and machine learning which are employed throughout the remaining parts of the
book. You are now prepared to embark upon your study of deep learning.
Figure 5.13: Training examples from the QMUL Multiview Face Dataset (Gong et al., 2000) for which the subjects were asked to move in such a way as to cover the two-dimensional manifold corresponding to two angles of rotation. We would like learning algorithms to be able to discover and disentangle such manifold coordinates. Figure 20.6 illustrates such a feat.
Part II
Deep Networks: Modern
Practices
This part of the book summarizes the state of modern deep learning as it is
used to solve practical applications.
Deep learning has a long history and many aspirations. Several approaches
have been proposed that have yet to entirely bear fruit. Several ambitious goals
have yet to be realized. These less-developed branches of deep learning appear in
the final part of the book.
This part focuses only on those approaches that are essentially working tech-
nologies that are already used heavily in industry.
Modern deep learning provides a very powerful framework for supervised
learning. By adding more layers and more units within a layer, a deep network can
represent functions of increasing complexity. Most tasks that consist of mapping an
input vector to an output vector, and that are easy for a person to do rapidly, can
be accomplished via deep learning, given sufficiently large models and sufficiently
large datasets of labeled training examples. Other tasks, that can not be described
as associating one vector to another, or that are difficult enough that a person
would require time to think and reflect in order to accomplish the task, remain
beyond the scope of deep learning for now.
This part of the book describes the core parametric function approximation
technology that is behind nearly all modern practical applications of deep learning.
We begin by describing the feedforward deep network model that is used to
represent these functions. Next, we present advanced techniques for regularization
and optimization of such models. Scaling these models to large inputs such as high
resolution images or long temporal sequences requires specialization. We introduce
the convolutional network for scaling to large images and the recurrent neural
network for processing temporal sequences. Finally, we present general guidelines
for the practical methodology involved in designing, building, and configuring an
application involving deep learning, and review some of the applications of deep
learning.
These chapters are the most important for a practitioner—someone who wants
to begin implementing and using deep learning algorithms to solve real-world
problems today.
Chapter 6
Deep Feedforward Networks
Deep feedforward networks, also often called feedforward neural networks,
or multilayer perceptrons (MLPs), are the quintessential deep learning models.
The goal of a feedforward network is to approximate some function f*. For example, for a classifier, y = f*(x) maps an input x to a category y. A feedforward network
defines a mapping y = f(x;θ) and learns the value of the parameters θ that result
in the best function approximation.
These models are called feedforward because information flows through the
function being evaluated from x, through the intermediate computations used to
define f, and finally to the output y. There are no feedback connections in which
outputs of the model are fed back into itself. When feedforward neural networks
are extended to include feedback connections, they are called recurrent neural
networks, presented in chapter 10.
Feedforward networks are of extreme importance to machine learning practi-
tioners. They form the basis of many important commercial applications. For
example, the convolutional networks used for object recognition from photos are a
specialized kind of feedforward network. Feedforward networks are a conceptual
stepping stone on the path to recurrent networks, which power many natural
language applications.
Feedforward neural networks are called networks because they are typically
represented by composing together many different functions. The model is asso-
ciated with a directed acyclic graph describing how the functions are composed
together. For example, we might have three functions f(1), f(2), and f(3) connected
in a chain, to form f(x) = f(3)(f(2)(f(1)(x))). These chain structures are the most
commonly used structures of neural networks. In this case, f(1) is called the first
layer of the network, f(2)
is called the second layer, and so on. The overall
length of the chain gives the depth of the model. It is from this terminology that
the name “deep learning” arises. The final layer of a feedforward network is called
the output layer. During neural network training, we drive f(x) to match f*(x). The training data provides us with noisy, approximate examples of f*(x) evaluated at different training points. Each example x is accompanied by a label y ≈ f*(x).
The training examples specify directly what the output layer must do at each point
x; it must produce a value that is close to y. The behavior of the other layers is
not directly specified by the training data. The learning algorithm must decide
how to use those layers to produce the desired output, but the training data does
not say what each individual layer should do. Instead, the learning algorithm must
decide how to use these layers to best implement an approximation of f∗. Because
the training data does not show the desired output for each of these layers, these
layers are called hidden layers.
Finally, these networks are called neural because they are loosely inspired by
neuroscience. Each hidden layer of the network is typically vector-valued. The
dimensionality of these hidden layers determines the width of the model. Each
element of the vector may be interpreted as playing a role analogous to a neuron.
Rather than thinking of the layer as representing a single vector-to-vector function,
we can also think of the layer as consisting of many units that act in parallel,
each representing a vector-to-scalar function. Each unit resembles a neuron in
the sense that it receives input from many other units and computes its own
activation value. The idea of using many layers of vector-valued representation
is drawn from neuroscience. The choice of the functions f^(i)(x) used to compute
these representations is also loosely guided by neuroscientific observations about
the functions that biological neurons compute. However, modern neural network
research is guided by many mathematical and engineering disciplines, and the
goal of neural networks is not to perfectly model the brain. It is best to think of
feedforward networks as function approximation machines that are designed to
achieve statistical generalization, occasionally drawing some insights from what we
know about the brain, rather than as models of brain function.
One way to understand feedforward networks is to begin with linear models
and consider how to overcome their limitations. Linear models, such as logistic
regression and linear regression, are appealing because they may be fit efficiently
and reliably, either in closed form or with convex optimization. Linear models also
have the obvious defect that the model capacity is limited to linear functions, so
the model cannot understand the interaction between any two input variables.
To extend linear models to represent nonlinear functions of x, we can apply
the linear model not to x itself but to a transformed input φ(x), where φ is a
nonlinear transformation. Equivalently, we can apply the kernel trick described in
section 5.7.2, to obtain a nonlinear learning algorithm based on implicitly applying the φ mapping. We can think of φ as providing a set of features describing x, or as providing a new representation for x.
The question is then how to choose the mapping φ.
1. One option is to use a very generic φ, such as the infinite-dimensional φ that
is implicitly used by kernel machines based on the RBF kernel. If φ(x) is
of high enough dimension, we can always have enough capacity to fit the
training set, but generalization to the test set often remains poor. Very
generic feature mappings are usually based only on the principle of local
smoothness and do not encode enough prior information to solve advanced
problems.
2. Another option is to manually engineer φ. Until the advent of deep learning,
this was the dominant approach. This approach requires decades of human
effort for each separate task, with practitioners specializing in different
domains such as speech recognition or computer vision, and with little
transfer between domains.
3. The strategy of deep learning is to learn φ. In this approach, we have a model
y = f(x; θ, w) = φ(x; θ)⊤w. We now have parameters θ that we use to learn
φ from a broad class of functions, and parameters w that map from φ(x) to
the desired output. This is an example of a deep feedforward network, with
φ defining a hidden layer. This approach is the only one of the three that
gives up on the convexity of the training problem, but the benefits outweigh
the harms. In this approach, we parametrize the representation as φ(x; θ)
and use the optimization algorithm to find the θ that corresponds to a good
representation. If we wish, this approach can capture the benefit of the first
approach by being highly generic—we do so by using a very broad family
φ(x;θ). This approach can also capture the benefit of the second approach.
Human practitioners can encode their knowledge to help generalization by
designing families φ(x; θ) that they expect will perform well. The advantage
is that the human designer only needs to find the right general function
family rather than finding precisely the right function.
This general principle of improving models by learning features extends beyond
the feedforward networks described in this chapter. It is a recurring theme of deep
learning that applies to all of the kinds of models described throughout this book.
Feedforward networks are the application of this principle to learning deterministic
mappings from x to y that lack feedback connections. Other models presented
later will apply these principles to learning stochastic mappings, learning functions
with feedback, and learning probability distributions over a single vector.
We begin this chapter with a simple example of a feedforward network. Next,
we address each of the design decisions needed to deploy a feedforward network.
First, training a feedforward network requires making many of the same design
decisions as are necessary for a linear model: choosing the optimizer, the cost
function, and the form of the output units. We review these basics of gradient-based
learning, then proceed to confront some of the design decisions that are unique
to feedforward networks. Feedforward networks have introduced the concept of a
hidden layer, and this requires us to choose the activation functions that will
be used to compute the hidden layer values. We must also design the architecture
of the network, including how many layers the network should contain, how these
layers should be connected to each other, and how many units should be in
each layer. Learning in deep neural networks requires computing the gradients
of complicated functions. We present the back-propagation algorithm and its
modern generalizations, which can be used to efficiently compute these gradients.
Finally, we close with some historical perspective.
6.1 Example: Learning XOR
To make the idea of a feedforward network more concrete, we begin with an
example of a fully functioning feedforward network on a very simple task: learning
the XOR function.
The XOR function (“exclusive or”) is an operation on two binary values, x1 and x2. When exactly one of these binary values is equal to 1, the XOR function returns 1. Otherwise, it returns 0. The XOR function provides the target function y = f*(x) that we want to learn. Our model provides a function y = f(x; θ) and our learning algorithm will adapt the parameters θ to make f as similar as possible to f*.
In this simple example, we will not be concerned with statistical generalization.
We want our network to perform correctly on the four points X = {[0, 0]⊤, [0, 1]⊤, [1, 0]⊤, and [1, 1]⊤}. We will train the network on all four of these points. The
only challenge is to fit the training set.
We can treat this problem as a regression problem and use a mean squared
error loss function. We choose this loss function to simplify the math for this
example as much as possible. In practical applications, MSE is usually not an
appropriate cost function for modeling binary data. More appropriate approaches
are described in section 6.2.2.2.
Evaluated on our whole training set, the MSE loss function is
$$J(\theta) = \frac{1}{4} \sum_{\mathbf{x} \in \mathbb{X}} \left(f^*(\mathbf{x}) - f(\mathbf{x}; \theta)\right)^2. \quad (6.1)$$
Now we must choose the form of our model, f(x; θ). Suppose that we choose a linear model, with θ consisting of w and b. Our model is defined to be
$$f(\mathbf{x}; \mathbf{w}, b) = \mathbf{x}^\top \mathbf{w} + b. \quad (6.2)$$
We can minimize J(θ) in closed form with respect to w and b using the normal equations.
After solving the normal equations, we obtain w = 0 and b = 1/2. The linear model simply outputs 0.5 everywhere. Why does this happen? Figure 6.1 shows
how a linear model is not able to represent the XOR function. One way to solve
this problem is to use a model that learns a different feature space in which a
linear model is able to represent the solution.
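This failure is easy to verify numerically. A small sketch (not from the book; using lstsq to solve the normal equations is an implementation choice) recovers w = 0 and b = 1/2 for the XOR training set:

import numpy as np

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])                       # XOR targets

X_aug = np.hstack([X, np.ones((4, 1))])              # append ones so b is fit too
params, *_ = np.linalg.lstsq(X_aug, y, rcond=None)   # least-squares solution
w, b = params[:2], params[2]
print(w, b)   # w is [0, 0] and b is 0.5 (up to floating-point error):
              # the linear model outputs 0.5 everywhere.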
Specifically, we will introduce a very simple feedforward network with one hidden layer containing two hidden units. See figure 6.2 for an illustration of this model. This feedforward network has a vector of hidden units h that are computed by a function f^(1)(x; W, c). The values of these hidden units are then used as the input for a second layer. The second layer is the output layer of the network. The output layer is still just a linear regression model, but now it is applied to h rather than to x. The network now contains two functions chained together: h = f^(1)(x; W, c) and y = f^(2)(h; w, b), with the complete model being f(x; W, c, w, b) = f^(2)(f^(1)(x)).
What function should f(1) compute? Linear models have served us well so far,
and it may be tempting to make f(1) be linear as well. Unfortunately, if f(1) were
linear, then the feedforward network as a whole would remain a linear function of
its input. Ignoring the intercept terms for the moment, suppose f^(1)(x) = W⊤x and f^(2)(h) = h⊤w. Then f(x) = w⊤W⊤x. We could represent this function as f(x) = x⊤w′ where w′ = Ww.
Clearly, we must use a nonlinear function to describe the features. Most neural
networks do so using an affine transformation controlled by learned parameters,
followed by a fixed, nonlinear function called an activation function. We use that
strategy here, by defining h = g(W⊤x + c), where W provides the weights of a
linear transformation and c the biases. Previously, to describe a linear regression
Figure 6.1: Solving the XOR problem by learning a representation. The bold numbers
printed on the plot indicate the value that the learned function must output at each point.
(Left)A linear model applied directly to the original input cannot implement the XOR
function. When x1 = 0, the model’s output must increase as x2 increases. When x1 = 1,
the model’s output must decrease as x2 increases. A linear model must apply a fixed
coefficient w2 to x2. The linear model therefore cannot use the value of x1 to change
the coefficient on x2 and cannot solve this problem. (Right)In the transformed space
represented by the features extracted by a neural network, a linear model can now solve
the problem. In our example solution, the two points that must have output 1 have been collapsed into a single point in feature space. In other words, the nonlinear features have mapped both x = [1, 0]⊤ and x = [0, 1]⊤ to a single point in feature space, h = [1, 0]⊤.
The linear model can now describe the function as increasing in h1 and decreasing in h2.
In this example, the motivation for learning the feature space is only to make the model
capacity greater so that it can fit the training set. In more realistic applications, learned
representations can also help the model to generalize.
Figure 6.2: An example of a feedforward network, drawn in two different styles. Specifically,
this is the feedforward network we use to solve the XOR example. It has a single hidden
layer containing two units. (Left)In this style, we draw every unit as a node in the graph.
This style is very explicit and unambiguous but for networks larger than this example
it can consume too much space. (Right) In this style, we draw a node in the graph for each entire vector representing a layer’s activations. This style is much more compact. Sometimes we annotate the edges in this graph with the name of the parameters that describe the relationship between two layers. Here, we indicate that a matrix W describes
the mapping from x to h, and a vector w describes the mapping from h to y. We
typically omit the intercept parameters associated with each layer when labeling this kind
of drawing.
model, we used a vector of weights and a scalar bias parameter to describe an
affine transformation from an input vector to an output scalar. Now, we describe
an affine transformation from a vector x to a vector h, so an entire vector of bias
parameters is needed. The activation function g is typically chosen to be a function
that is applied element-wise, with h_i = g(x⊤W_{:,i} + c_i). In modern neural networks, the default recommendation is to use the rectified linear unit, or ReLU (Jarrett et al., 2009; Nair and Hinton, 2010; Glorot et al., 2011a), defined by the activation function g(z) = max{0, z}, depicted in figure 6.3.
We can now specify our complete network as
$$f(\mathbf{x}; \mathbf{W}, \mathbf{c}, \mathbf{w}, b) = \mathbf{w}^\top \max\{0, \mathbf{W}^\top \mathbf{x} + \mathbf{c}\} + b. \quad (6.3)$$
We can now specify a solution to the XOR problem. Let
$$\mathbf{W} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad (6.4)$$
$$\mathbf{c} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, \quad (6.5)$$
$$\mathbf{w} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \quad (6.6)$$
and b = 0.

Figure 6.3: The rectified linear activation function. This activation function is the default activation function recommended for use with most feedforward neural networks. Applying this function to the output of a linear transformation yields a nonlinear transformation. However, the function remains very close to linear, in the sense that it is a piecewise linear function with two linear pieces. Because rectified linear units are nearly linear, they preserve many of the properties that make linear models easy to optimize with gradient-based methods. They also preserve many of the properties that make linear models generalize well. A common principle throughout computer science is that we can build complicated systems from minimal components. Much as a Turing machine’s memory needs only to be able to store 0 or 1 states, we can build a universal function approximator from rectified linear functions.
We can now walk through the way that the model processes a batch of inputs.
Let X be the design matrix containing all four points in the binary input space,
with one example per row:
$$\mathbf{X} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{bmatrix}. \quad (6.7)$$
The first step in the neural network is to multiply the input matrix by the first
layer’s weight matrix:
$$\mathbf{X}\mathbf{W} = \begin{bmatrix} 0 & 0 \\ 1 & 1 \\ 1 & 1 \\ 2 & 2 \end{bmatrix}. \quad (6.8)$$
Next, we add the bias vector c, to obtain
$$\begin{bmatrix} 0 & -1 \\ 1 & 0 \\ 1 & 0 \\ 2 & 1 \end{bmatrix}. \quad (6.9)$$
In this space, all of the examples lie along a line with slope 1. As we move along this line, the output needs to begin at 0, then rise to 1, then drop back down to 0. A linear model cannot implement such a function. To finish computing the value of h for each example, we apply the rectified linear transformation:
$$\begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 1 & 0 \\ 2 & 1 \end{bmatrix}. \quad (6.10)$$
This transformation has changed the relationship between the examples. They no
longer lie on a single line. As shown in figure 6.1, they now lie in a space where a linear model can solve the problem.
We finish by multiplying by the weight vector w:
$$\begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix}. \quad (6.11)$$
The neural network has obtained the correct answer for every example in the batch.
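The whole walk-through fits in a few lines of NumPy. The following sketch (not code from the book) simply evaluates the hand-specified solution above on the design matrix:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # design matrix (6.7)
W = np.array([[1, 1], [1, 1]], dtype=float)                  # first-layer weights (6.4)
c = np.array([0, -1], dtype=float)                           # first-layer biases (6.5)
w = np.array([1, -2], dtype=float)                           # output weights (6.6)
b = 0.0                                                      # output bias

H = np.maximum(0, X @ W + c)   # hidden units: rectified affine transformation
y_hat = H @ w + b              # output layer: linear regression applied to h
print(y_hat)                   # [0. 1. 1. 0.], the XOR of each row of X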
In this example, we simply specified the solution, then showed that it obtained
zero error. In a real situation, there might be billions of model parameters and
billions of training examples, so one cannot simply guess the solution as we did
here. Instead, a gradient-based optimization algorithm can find parameters that
produce very little error. The solution we described to the XOR problem is at a
global minimum of the loss function, so gradient descent could converge to this
point. There are other equivalent solutions to the XOR problem that gradient
descent could also find. The convergence point of gradient descent depends on the
initial values of the parameters. In practice, gradient descent would usually not
find clean, easily understood, integer-valued solutions like the one we presented
here.
6.2 Gradient-Based Learning
Designing and training a neural network is not much different from training any
other machine learning model with gradient descent. In section 5.10, we described
how to build a machine learning algorithm by specifying an optimization procedure,
a cost function, and a model family.
The largest difference between the linear models we have seen so far and neural
networks is that the nonlinearity of a neural network causes most interesting loss
functions to become non-convex. This means that neural networks are usually
trained by using iterative, gradient-based optimizers that merely drive the cost
function to a very low value, rather than the linear equation solvers used to train
linear regression models or the convex optimization algorithms with global conver-
gence guarantees used to train logistic regression or SVMs. Convex optimization
converges starting from any initial parameters (in theory—in practice it is very
robust but can encounter numerical problems). Stochastic gradient descent applied
to non-convex loss functions has no such convergence guarantee, and is sensitive
to the values of the initial parameters. For feedforward neural networks, it is
important to initialize all weights to small random values. The biases may be
initialized to zero or to small positive values. The iterative gradient-based opti-
mization algorithms used to train feedforward networks and almost all other deep
models will be described in detail in chapter 8, with parameter initialization in particular discussed in section 8.4. For the moment, it suffices to understand that the training algorithm is almost always based on using the gradient to descend the cost function in one way or another. The specific algorithms are improvements and refinements on the ideas of gradient descent, introduced in section 4.3, and,
more specifically, are most often improvements of the stochastic gradient descent
algorithm, introduced in section 5.9.
We can, of course, train models such as linear regression and support vector
machines with gradient descent too, and in fact this is common when the training
set is extremely large. From this point of view, training a neural network is not
much different from training any other model. Computing the gradient is slightly
more complicated for a neural network, but can still be done efficiently and exactly.
Section 6.5 will describe how to obtain the gradient using the back-propagation
algorithm and modern generalizations of the back-propagation algorithm.
As with other machine learning models, to apply gradient-based learning we
must choose a cost function, and we must choose how to represent the output of
the model. We now revisit these design considerations with special emphasis on
the neural networks scenario.
6.2.1 Cost Functions
An important aspect of the design of a deep neural network is the choice of the
cost function. Fortunately, the cost functions for neural networks are more or less
the same as those for other parametric models, such as linear models.
In most cases, our parametric model defines a distribution p(y | x; θ) and
we simply use the principle of maximum likelihood. This means we use the
cross-entropy between the training data and the model’s predictions as the cost
function.
Sometimes, we take a simpler approach, where rather than predicting a complete
probability distribution over y, we merely predict some statistic of y conditioned
on x. Specialized loss functions allow us to train a predictor of these estimates.
The total cost function used to train a neural network will often combine one
of the primary cost functions described here with a regularization term. We have
already seen some simple examples of regularization applied to linear models in
section 5.2.2. The weight decay approach used for linear models is also directly applicable to deep neural networks and is among the most popular regularization strategies. More advanced regularization strategies for neural networks will be described in chapter 7.
6.2.1.1 Learning Conditional Distributions with Maximum Likelihood
Most modern neural networks are trained using maximum likelihood. This means
that the cost function is simply the negative log-likelihood, equivalently described
as the cross-entropy between the training data and the model distribution. This
cost function is given by
$$J(\theta) = -\mathbb{E}_{\mathbf{x}, \mathbf{y} \sim \hat{p}_{\text{data}}} \log p_{\text{model}}(\mathbf{y} \mid \mathbf{x}). \quad (6.12)$$
The specific form of the cost function changes from model to model, depending
on the specific form of log pmodel. The expansion of the above equation typically
yields some terms that do not depend on the model parameters and may be dis-
carded. For example, as we saw in section 5.5.1, if p_model(y | x) = N(y; f(x; θ), I), then we recover the mean squared error cost,
$$J(\theta) = \frac{1}{2} \mathbb{E}_{\mathbf{x}, \mathbf{y} \sim \hat{p}_{\text{data}}} \|\mathbf{y} - f(\mathbf{x}; \theta)\|^2 + \text{const}, \quad (6.13)$$
up to a scaling factor of 1/2 and a term that does not depend on θ. The discarded constant is based on the variance of the Gaussian distribution, which in this case we chose not to parametrize. Previously, we saw that the equivalence between maximum likelihood estimation with an output distribution and minimization of mean squared error holds for a linear model, but in fact, the equivalence holds regardless of the f(x; θ) used to predict the mean of the Gaussian.
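As a quick numerical check (a sketch under the stated assumption of a fixed, identity-covariance Gaussian; the function names are ours, not the book's), the negative log-likelihood differs from half the squared error only by a constant that does not depend on the parameters:

import numpy as np

def gaussian_nll(y, y_pred):
    # Negative log-likelihood of y under N(y; y_pred, I), averaged over
    # examples; y and y_pred are arrays of shape (n_examples, d).
    d = y.shape[1]
    sq = 0.5 * np.sum((y - y_pred) ** 2, axis=1)
    const = 0.5 * d * np.log(2 * np.pi)        # independent of the parameters
    return np.mean(sq + const)

def half_mse(y, y_pred):
    return 0.5 * np.mean(np.sum((y - y_pred) ** 2, axis=1))

# gaussian_nll(y, y_pred) equals half_mse(y, y_pred) + 0.5 * d * log(2 * pi).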
An advantage of this approach of deriving the cost function from maximum
likelihood is that it removes the burden of designing cost functions for each model.
Specifying a model p(y | x) automatically determines a cost function −log p(y | x).
One recurring theme throughout neural network design is that the gradient of
the cost function must be large and predictable enough to serve as a good guide
for the learning algorithm. Functions that saturate (become very flat) undermine
this objective because they make the gradient become very small. In many cases
this happens because the activation functions used to produce the output of the
hidden units or the output units saturate. The negative log-likelihood helps to
avoid this problem for many models. Many output units involve an exp function
that can saturate when its argument is very negative. The log function in the
negative log-likelihood cost function undoes the exp of some output units. We will
discuss the interaction between the cost function and the choice of output unit in
section 6.2.2.
One unusual property of the cross-entropy cost used to perform maximum
likelihood estimation is that it usually does not have a minimum value when applied
to the models commonly used in practice. For discrete output variables, most
models are parametrized in such a way that they cannot represent a probability
of zero or one, but can come arbitrarily close to doing so. Logistic regression
is an example of such a model. For real-valued output variables, if the model
can control the density of the output distribution (for example, by learning the
variance parameter of a Gaussian output distribution) then it becomes possible
to assign extremely high density to the correct training set outputs, resulting in
cross-entropy approaching negative infinity. Regularization techniques described
in chapter 7 provide several different ways of modifying the learning problem so
that the model cannot reap unlimited reward in this way.
6.2.1.2 Learning Conditional Statistics
Instead of learning a full probability distribution p(y | x; θ) we often want to learn just one conditional statistic of y given x.
For example, we may have a predictor f(x; θ) that we wish to use to predict the mean of y.
If we use a sufficiently powerful neural network, we can think of the neural
network as being able to represent any function f from a wide class of functions,
with this class being limited only by features such as continuity and boundedness
rather than by having a specific parametric form. From this point of view, we
can view the cost function as being a functional rather than just a function. A
functional is a mapping from functions to real numbers. We can thus think of
learning as choosing a function rather than merely choosing a set of parameters.
We can design our cost functional to have its minimum occur at some specific
function we desire. For example, we can design the cost functional to have its
minimum lie on the function that maps x to the expected value of y given x.
Solving an optimization problem with respect to a function requires a mathematical
tool called calculus of variations, described in section 19.4.2. It is not necessary
to understand calculus of variations to understand the content of this chapter. At
the moment, it is only necessary to understand that calculus of variations may be
used to derive the following two results.
Our first result derived using calculus of variations is that solving the optimiza-
tion problem
$$f^* = \arg\min_f \mathbb{E}_{\mathbf{x}, \mathbf{y} \sim p_{\text{data}}} \|\mathbf{y} - f(\mathbf{x})\|^2 \quad (6.14)$$
yields
$$f^*(\mathbf{x}) = \mathbb{E}_{\mathbf{y} \sim p_{\text{data}}(\mathbf{y} \mid \mathbf{x})}[\mathbf{y}], \quad (6.15)$$
so long as this function lies within the class we optimize over. In other words, if we could train on infinitely many samples from the true data generating distribution, minimizing the mean squared error cost function gives a function that predicts the mean of y for each value of x.
Different cost functions give different statistics. A second result derived using
calculus of variations is that
$$f^* = \arg\min_f \mathbb{E}_{\mathbf{x}, \mathbf{y} \sim p_{\text{data}}} \|\mathbf{y} - f(\mathbf{x})\|_1 \quad (6.16)$$
yields a function that predicts the median value of y for each x, so long as such a function may be described by the family of functions we optimize over. This cost function is commonly called mean absolute error.
Unfortunately, mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization. Some output units that saturate produce very small gradients when combined with these cost functions. This is one reason that the cross-entropy cost function is more popular than mean squared error or mean absolute error, even when it is not necessary to estimate an entire distribution p(y | x).
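The mean and median results above are easy to check empirically with a constant predictor (a minimal sketch, not the book's code; the exponential distribution is chosen only because it is conveniently skewed): the constant that minimizes mean squared error lands near the sample mean, while the constant that minimizes mean absolute error lands near the sample median.

import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=10000)        # a deliberately skewed sample

candidates = np.linspace(0, 5, 2001)              # constant predictions to try
mse = [np.mean((y - c) ** 2) for c in candidates]
mae = [np.mean(np.abs(y - c)) for c in candidates]

print(candidates[np.argmin(mse)], np.mean(y))     # MSE minimizer is near the mean
print(candidates[np.argmin(mae)], np.median(y))   # MAE minimizer is near the median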
6.2.2 Output Units
The choice of cost function is tightly coupled with the choice of output unit. Most
of the time, we simply use the cross-entropy between the data distribution and the
model distribution. The choice of how to represent the output then determines
the form of the cross-entropy function.
Any kind of neural network unit that may be used as an output can also be
used as a hidden unit. Here, we focus on the use of these units as outputs of the
model, but in principle they can be used internally as well. We revisit these units
with additional detail about their use as hidden units in section 6.3.
Throughout this section, we suppose that the feedforward network provides a
set of hidden features defined by h = f(x;θ). The role of the output layer is then
to provide some additional transformation from the features to complete the task
that the network must perform.
6.2.2.1 Linear Units for Gaussian Output Distributions
One simple kind of output unit is an output unit based on an affine transformation
with no nonlinearity. These are often just called linear units.
Given features h, a layer of linear output units produces a vector ŷ = W⊤h + b.
Linear output layers are often used to produce the mean of a conditional
Gaussian distribution:
$$p(\mathbf{y} \mid \mathbf{x}) = \mathcal{N}(\mathbf{y}; \hat{\mathbf{y}}, \mathbf{I}). \quad (6.17)$$
Maximizing the log-likelihood is then equivalent to minimizing the mean squared
error.
The maximum likelihood framework makes it straightforward to learn the
covariance of the Gaussian too, or to make the covariance of the Gaussian be a
function of the input. However, the covariance must be constrained to be a positive
definite matrix for all inputs. It is difficult to satisfy such constraints with a linear
output layer, so typically other output units are used to parametrize the covariance.
Approaches to modeling the covariance are described shortly, in section 6.2.2.4.
Because linear units do not saturate, they pose little difficulty for gradient-
based optimization algorithms and may be used with a wide variety of optimization
algorithms.
6.2.2.2 Sigmoid Units for Bernoulli Output Distributions
Many tasks require predicting the value of a binary variable y. Classification
problems with two classes can be cast in this form.
The maximum-likelihood approach is to define a Bernoulli distribution over y
conditioned on x.
A Bernoulli distribution is defined by just a single number. The neural net
needs to predict only P(y = 1 | x). For this number to be a valid probability, it
must lie in the interval [0, 1].
Satisfying this constraint requires some careful design effort. Suppose we were
to use a linear unit, and threshold its value to obtain a valid probability:
$$P(y = 1 \mid \mathbf{x}) = \max\left\{0, \min\left\{1, \mathbf{w}^\top \mathbf{h} + b\right\}\right\}. \quad (6.18)$$
This would indeed define a valid conditional distribution, but we would not be able
to train it very effectively with gradient descent. Any time that w⊤h + b strayed
outside the unit interval, the gradient of the output of the model with respect to
its parameters would be 0. A gradient of 0 is typically problematic because the
learning algorithm no longer has a guide for how to improve the corresponding
parameters.
Instead, it is better to use a different approach that ensures there is always a
strong gradient whenever the model has the wrong answer. This approach is based
on using sigmoid output units combined with maximum likelihood.
A sigmoid output unit is defined by
$$\hat{y} = \sigma\left(\mathbf{w}^\top \mathbf{h} + b\right), \quad (6.19)$$
where σ is the logistic sigmoid function described in section 3.10.
We can think of the sigmoid output unit as having two components. First, it
uses a linear layer to compute z = w⊤h + b. Next, it uses the sigmoid activation
function to convert z into a probability.
We omit the dependence on x for the moment to discuss how to define a
probability distribution over y using the value z. The sigmoid can be motivated
by constructing an unnormalized probability distribution P̃(y), which does not
sum to 1. We can then divide by an appropriate constant to obtain a valid
probability distribution. If we begin with the assumption that the unnormalized log
probabilities are linear in y and z, we can exponentiate to obtain the unnormalized
probabilities. We then normalize to see that this yields a Bernoulli distribution
controlled by a sigmoidal transformation of z:
log P̃(y) = yz, (6.20)
P̃(y) = exp(yz), (6.21)
P(y) = exp(yz) / Σ_{y′=0}^{1} exp(y′z), (6.22)
P(y) = σ((2y − 1)z). (6.23)
Probability distributions based on exponentiation and normalization are common
throughout the statistical modeling literature. The z variable defining such a
distribution over binary variables is called a logit.
This approach to predicting the probabilities in log-space is natural to use
with maximum likelihood learning. Because the cost function used with maximum
likelihood is − log P(y | x), the log in the cost function undoes the exp of the
sigmoid. Without this effect, the saturation of the sigmoid could prevent gradient-
based learning from making good progress. The loss function for maximum
likelihood learning of a Bernoulli parametrized by a sigmoid is
J(θ) = −log P(y | x) (6.24)
     = −log σ((2y − 1)z) (6.25)
     = ζ((1 − 2y)z). (6.26)
This derivation makes use of some properties from section 3.10. By rewriting
the loss in terms of the softplus function, we can see that it saturates only when
(1 − 2y)z is very negative. Saturation thus occurs only when the model already
has the right answer—when y = 1 and z is very positive, or y = 0 and z is very
negative. When z has the wrong sign, the argument to the softplus function,
(1 − 2y)z, may be simplified to |z|. As |z| becomes large while z has the wrong sign,
the softplus function asymptotes toward simply returning its argument |z|. The
derivative with respect to z asymptotes to sign(z), so, in the limit of extremely
incorrect z, the softplus function does not shrink the gradient at all. This property
is very useful because it means that gradient-based learning can act to quickly
correct a mistaken z.
When we use other loss functions, such as mean squared error, the loss can
saturate anytime σ(z) saturates. The sigmoid activation function saturates to 0
when z becomes very negative and saturates to 1 when z becomes very positive.
The gradient can shrink too small to be useful for learning whenever this happens,
whether the model has the correct answer or the incorrect answer. For this reason,
maximum likelihood is almost always the preferred approach to training sigmoid
output units.
Analytically, the logarithm of the sigmoid is always defined and finite, because
the sigmoid returns values restricted to the open interval (0, 1), rather than using
the entire closed interval of valid probabilities [0,1]. In software implementations,
to avoid numerical problems, it is best to write the negative log-likelihood as a
function of z, rather than as a function of ŷ = σ(z ). If the sigmoid function
underflows to zero, then taking the logarithm of ŷ yields negative infinity.
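To make this advice concrete, here is a minimal NumPy sketch (an illustration written from the equations above, not a prescribed implementation) of the loss ζ((1 − 2y)z) computed directly from the logit z:

```python
import numpy as np

def softplus(x):
    # ζ(x) = log(1 + exp(x)), computed stably via log(exp(0) + exp(x))
    return np.logaddexp(0.0, x)

def bernoulli_nll_from_logit(y, z):
    # -log P(y | x) = ζ((1 - 2y) z); working with z directly avoids
    # log(σ(z)) returning -inf when σ(z) underflows to zero.
    return softplus((1 - 2 * y) * z)

print(bernoulli_nll_from_logit(1, 5.0))    # small loss: confident and correct
print(bernoulli_nll_from_logit(1, -50.0))  # large loss, but finite and with useful gradient
```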
6.2.2.3 Softmax Units for Multinoulli Output Distributions
Any time we wish to represent a probability distribution over a discrete variable
with n possible values, we may use the softmax function. This can be seen as a
generalization of the sigmoid function which was used to represent a probability
distribution over a binary variable.
Softmax functions are most often used as the output of a classifier, to represent
the probability distribution over n different classes. More rarely, softmax functions
can be used inside the model itself, if we wish the model to choose between one of
n different options for some internal variable.
In the case of binary variables, we wished to produce a single number
ŷ = P(y = 1 | x). (6.27)
Because this number needed to lie between 0 and 1, and because we wanted the
logarithm of the number to be well-behaved for gradient-based optimization of
the log-likelihood, we chose to instead predict a number z = log P̃(y = 1 | x).
Exponentiating and normalizing gave us a Bernoulli distribution controlled by the
sigmoid function.
To generalize to the case of a discrete variable with n values, we now need
to produce a vector ŷ, with ŷi = P(y = i | x). We require not only that each
element of ŷi be between 0 and 1, but also that the entire vector sums to 1 so that
it represents a valid probability distribution. The same approach that worked for
the Bernoulli distribution generalizes to the multinoulli distribution. First, a linear
layer predicts unnormalized log probabilities:
z = W⊤h + b, (6.28)
where zi = log P̃(y = i | x). The softmax function can then exponentiate and
normalize z to obtain the desired ŷ. Formally, the softmax function is given by
softmax(z)i = exp(zi) / Σj exp(zj). (6.29)
As with the logistic sigmoid, the use of the exp function works very well when
training the softmax to output a target value y using maximum log-likelihood. In
this case, we wish to maximize log P (y = i; z) = log softmax(z)i. Defining the
softmax in terms of exp is natural because the log in the log-likelihood can undo
the exp of the softmax:
log softmax(z)i = zi − log Σj exp(zj). (6.30)
The first term of equation 6.30 shows that the input zi always has a direct
contribution to the cost function. Because this term cannot saturate, we know
that learning can proceed, even if the contribution of zi to the second term of
equation 6.30 becomes very small. When maximizing the log-likelihood, the first
term encourages zi to be pushed up, while the second term encourages all of z to be
pushed down. To gain some intuition for the second term, log Σj exp(zj), observe
that this term can be roughly approximated by maxj zj. This approximation is
based on the idea that exp(zk) is insignificant for any zk that is noticeably less than
maxj zj. The intuition we can gain from this approximation is that the negative
log-likelihood cost function always strongly penalizes the most active incorrect
prediction. If the correct answer already has the largest input to the softmax, then
the −zi term and the log Σj exp(zj) ≈ maxj zj = zi terms will roughly cancel.
This example will then contribute little to the overall training cost, which will be
dominated by other examples that are not yet correctly classified.
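The following NumPy sketch (illustrative; the function name and example logits are chosen for this passage) computes the negative log-likelihood −log softmax(z)i directly from equation 6.30, and shows that a well-classified example contributes almost nothing to the loss:

```python
import numpy as np

def softmax_nll_from_logits(z, i):
    # -log softmax(z)_i = logsumexp(z) - z_i   (equation 6.30, negated)
    z_max = np.max(z)  # subtracting the max keeps exp from overflowing
    logsumexp = z_max + np.log(np.sum(np.exp(z - z_max)))
    return logsumexp - z[i]

z = np.array([10.0, 0.0, -3.0])
print(softmax_nll_from_logits(z, 0))  # correct class dominates: loss near 0
print(softmax_nll_from_logits(z, 2))  # incorrect, confident prediction: large loss
```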
So far we have discussed only a single example. Overall, unregularized maximum
likelihood will drive the model to learn parameters that drive the softmax to predict
the fraction of counts of each outcome observed in the training set:
softmax(z(x; θ))i ≈ (Σ_{j=1}^{m} 1_{y^(j)=i, x^(j)=x}) / (Σ_{j=1}^{m} 1_{x^(j)=x}). (6.31)
Because maximum likelihood is a consistent estimator, this is guaranteed to happen
so long as the model family is capable of representing the training distribution. In
practice, limited model capacity and imperfect optimization will mean that the
model is only able to approximate these fractions.
Many objective functions other than the log-likelihood do not work as well
with the softmax function. Specifically, objective functions that do not use a log to
undo the exp of the softmax fail to learn when the argument to the exp becomes
very negative, causing the gradient to vanish. In particular, squared error is a
poor loss function for softmax units, and can fail to train the model to change its
output, even when the model makes highly confident incorrect predictions (Bridle,
1990). To understand why these other loss functions can fail, we need to examine
the softmax function itself.
Like the sigmoid, the softmax activation can saturate. The sigmoid function has
a single output that saturates when its input is extremely negative or extremely
positive. In the case of the softmax, there are multiple output values. These
output values can saturate when the differences between input values become
extreme. When the softmax saturates, many cost functions based on the softmax
also saturate, unless they are able to invert the saturating activation function.
To see that the softmax function responds to the difference between its inputs,
observe that the softmax output is invariant to adding the same scalar to all of its
inputs:
softmax(z) = softmax(z + c). (6.32)
Using this property, we can derive a numerically stable variant of the softmax:
softmax(z) = softmax(z − maxi zi). (6.33)
The reformulated version allows us to evaluate softmax with only small numerical
errors even when z contains extremely large or extremely negative numbers. Ex-
amining the numerically stable variant, we see that the softmax function is driven
by the amount that its arguments deviate from maxi zi.
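A minimal sketch of the numerically stable variant (NumPy; illustrative only):

```python
import numpy as np

def stable_softmax(z):
    # softmax(z) = softmax(z - max_i z_i): subtracting the max changes nothing
    # mathematically but prevents overflow in exp for very large logits.
    shifted = z - np.max(z)
    e = np.exp(shifted)
    return e / np.sum(e)

print(stable_softmax(np.array([1000.0, 1001.0, 999.0])))  # no overflow
```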
An output softmax(z)i saturates to 1 when the corresponding input is maximal
(zi = maxi zi) and zi is much greater than all of the other inputs. The output
softmax(z)i can also saturate to 0 when zi is not maximal and the maximum is
much greater. This is a generalization of the way that sigmoid units saturate, and
can cause similar difficulties for learning if the loss function is not designed to
compensate for it.
The argument z to the softmax function can be produced in two different ways.
The most common is simply to have an earlier layer of the neural network output
every element of z, as described above using the linear layer z = W⊤h + b. While
straightforward, this approach actually overparametrizes the distribution. The
constraint that the n outputs must sum to 1 means that only n − 1 parameters are
necessary; the probability of the n-th value may be obtained by subtracting the
first n − 1 probabilities from 1. We can thus impose a requirement that one element
of z be fixed. For example, we can require that zn = 0. Indeed, this is exactly
what the sigmoid unit does. Defining P (y = 1 | x) = σ(z) is equivalent to defining
P(y = 1 | x) = softmax(z)1 with a two-dimensional z and z1 = 0. Both the n − 1
argument and the n argument approaches to the softmax can describe the same
set of probability distributions, but have different learning dynamics. In practice,
there is rarely much difference between using the overparametrized version or the
restricted version, and it is simpler to implement the overparametrized version.
From a neuroscientific point of view, it is interesting to think of the softmax as
a way to create a form of competition between the units that participate in it: the
softmax outputs always sum to 1 so an increase in the value of one unit necessarily
corresponds to a decrease in the value of others. This is analogous to the lateral
inhibition that is believed to exist between nearby neurons in the cortex. At the
extreme (when the difference between the maximal ai and the others is large in
magnitude) it becomes a form of winner-take-all (one of the outputs is nearly 1
and the others are nearly 0).
The name “softmax” can be somewhat confusing. The function is more closely
related to the arg max function than the max function. The term “soft” derives
from the fact that the softmax function is continuous and differentiable. The
arg max function, with its result represented as a one-hot vector, is not continuous
or differentiable. The softmax function thus provides a “softened” version of the
arg max. The corresponding soft version of the maximum function is softmax(z)⊤z.
It would perhaps be better to call the softmax function “softargmax,” but the
current name is an entrenched convention.
6.2.2.4 Other Output Types
The linear, sigmoid, and softmax output units described above are the most
common. Neural networks can generalize to almost any kind of output layer that
we wish. The principle of maximum likelihood provides a guide for how to design
a good cost function for nearly any kind of output layer.
In general, if we define a conditional distribution p(y | x; θ), the principle of
maximum likelihood suggests we use −log p(y | x; θ) as our cost function.
In general, we can think of the neural network as representing a function f(x;θ).
The outputs of this function are not direct predictions of the value y. Instead,
f(x;θ) = ω provides the parameters for a distribution over y. Our loss function
can then be interpreted as −log p(y; ω(x)).
For example, we may wish to learn the variance of a conditional Gaussian for y,
given x. In the simple case, where the variance σ² is a constant, there is a closed-form
expression because the maximum likelihood estimator of variance is simply the
empirical mean of the squared difference between observations y and their expected
value. A computationally more expensive approach that does not require writing
special-case code is to simply include the variance as one of the properties of the
distribution p(y | x) that is controlled by ω = f(x; θ). The negative log-likelihood
− log p(y;ω(x)) will then provide a cost function with the appropriate terms
necessary to make our optimization procedure incrementally learn the variance. In
the simple case where the standard deviation does not depend on the input, we
can make a new parameter in the network that is copied directly into ω. This new
parameter might be σ itself or could be a parameter v representing σ², or it could
be a parameter β representing 1/σ², depending on how we choose to parametrize
the distribution. We may wish our model to predict a different amount of variance
in y for different values of x. This is called a heteroscedastic model. In the
heteroscedastic case, we simply make the specification of the variance be one of
the values output by f(x;θ). A typical way to do this is to formulate the Gaussian
distribution using precision, rather than variance, as described in equation 3.22.
In the multivariate case it is most common to use a diagonal precision matrix
diag(β). (6.34)
This formulation works well with gradient descent because the formula for the
log-likelihood of the Gaussian distribution parametrized by β involves only mul-
tiplication by βi and addition of log βi. The gradient of multiplication, addition,
and logarithm operations is well-behaved. By comparison, if we parametrized the
output in terms of variance, we would need to use division. The division function
becomes arbitrarily steep near zero. While large gradients can help learning,
arbitrarily large gradients usually result in instability. If we parametrized the
output in terms of standard deviation, the log-likelihood would still involve division,
and would also involve squaring. The gradient through the squaring operation
can vanish near zero, making it difficult to learn parameters that are squared.
Regardless of whether we use standard deviation, variance, or precision, we must
ensure that the covariance matrix of the Gaussian is positive definite. Because
the eigenvalues of the precision matrix are the reciprocals of the eigenvalues of
the covariance matrix, this is equivalent to ensuring that the precision matrix is
positive definite. If we use a diagonal matrix, or a scalar times the diagonal matrix,
then the only condition we need to enforce on the output of the model is positivity.
If we suppose that a is the raw activation of the model used to determine the
diagonal precision, we can use the softplus function to obtain a positive precision
vector: β = ζ(a). This same strategy applies equally if using variance or standard
deviation rather than precision or if using a scalar times identity rather than
diagonal matrix.
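To make this concrete, here is a small NumPy sketch (an illustration under the assumptions just described, not the book's code) that maps a raw activation a to a positive diagonal precision β = ζ(a) and evaluates the corresponding negative log-likelihood:

```python
import numpy as np

def diag_gaussian_nll(y, mu, a):
    # beta = softplus(a) = log(1 + exp(a)) > 0 serves as a diagonal precision vector.
    beta = np.logaddexp(0.0, a)
    # -log N(y; mu, diag(beta)^(-1)) = 0.5 * sum(beta*(y - mu)^2 - log(beta) + log(2*pi)),
    # which involves only multiplication by beta and addition of log(beta).
    return 0.5 * np.sum(beta * (y - mu) ** 2 - np.log(beta) + np.log(2 * np.pi))
```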
It is rare to learn a covariance or precision matrix with richer structure than
diagonal. If the covariance is full and conditional, then a parametrization must
be chosen that guarantees positive-definiteness of the predicted covariance matrix.
This can be achieved by writing Σ(x) = B(x)B⊤(x), where B is an unconstrained
square matrix. One practical issue if the matrix is full rank is that computing the
likelihood is expensive, with a d × d matrix requiring O(d³) computation for the
determinant and inverse of Σ(x) (or equivalently, and more commonly done, its
eigendecomposition or that of B(x)).
We often want to perform multimodal regression, that is, to predict real values
that come from a conditional distribution p(y | x) that can have several different
peaks in y space for the same value of x. In this case, a Gaussian mixture is
a natural representation for the output (Jacobs et al., 1991; Bishop, 1994).
Neural networks with Gaussian mixtures as their output are often called mixture
density networks. A Gaussian mixture output with n components is defined by
the conditional probability distribution
p(y | x) = Σ^n_{i=1} p(c = i | x) N(y; µ^(i)(x), Σ^(i)(x)). (6.35)
The neural network must have three outputs: a vector defining p(c = i | x), a
matrix providing µ^(i)(x) for all i, and a tensor providing Σ^(i)(x) for all i. These
outputs must satisfy different constraints:
1. Mixture components p(c = i | x): these form a multinoulli distribution
over the n different components associated with latent variable¹ c, and can
typically be obtained by a softmax over an n-dimensional vector, to guarantee
that these outputs are positive and sum to 1.

¹We consider c to be latent because we do not observe it in the data: given input x and target
y, it is not possible to know with certainty which Gaussian component was responsible for y, but
we can imagine that y was generated by picking one of them, and make that unobserved choice a
random variable.
2. Means µ^(i)(x): these indicate the center or mean associated with the i-th
Gaussian component, and are unconstrained (typically with no nonlinearity
at all for these output units). If y is a d-vector, then the network must output
an n × d matrix containing all n of these d-dimensional vectors. Learning
these means with maximum likelihood is slightly more complicated than
learning the means of a distribution with only one output mode. We only
want to update the mean for the component that actually produced the
observation. In practice, we do not know which component produced each
observation. The expression for the negative log-likelihood naturally weights
each example’s contribution to the loss for each component by the probability
that the component produced the example.
3. Covariances Σ^(i)(x): these specify the covariance matrix for each component
i. As when learning a single Gaussian component, we typically use a diagonal
matrix to avoid needing to compute determinants. As with learning the means
of the mixture, maximum likelihood is complicated by needing to assign
partial responsibility for each point to each mixture component. Gradient
descent will automatically follow the correct process if given the correct
specification of the negative log-likelihood under the mixture model.
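A minimal NumPy sketch of such an output layer's loss (illustrative; the use of a diagonal variance obtained via softplus, and the function and argument names, are assumptions made for this example, not a prescription):

```python
import numpy as np

def mdn_nll(y, logits, means, raw_scales):
    """Negative log-likelihood of y under a diagonal Gaussian mixture.

    logits:     (n,)   unnormalized mixture weights, passed through a softmax
    means:      (n, d) one mean vector per component (unconstrained)
    raw_scales: (n, d) unconstrained; softplus makes each variance positive
    """
    m = np.max(logits)
    log_weights = logits - (m + np.log(np.sum(np.exp(logits - m))))  # log softmax
    var = np.logaddexp(0.0, raw_scales)                              # softplus > 0
    # log N(y; mu_i, diag(var_i)) for every component i
    log_comp = -0.5 * np.sum((y - means) ** 2 / var + np.log(var) + np.log(2 * np.pi), axis=1)
    log_mix = log_weights + log_comp
    top = np.max(log_mix)
    return -(top + np.log(np.sum(np.exp(log_mix - top))))            # -logsumexp
```

The logsumexp over components is what weights each example's contribution to the loss by the probability that each component produced it.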
It has been reported that gradient-based optimization of conditional Gaussian
mixtures (on the output of neural networks) can be unreliable, in part because one
gets divisions (by the variance) which can be numerically unstable (when some
variance gets to be small for a particular example, yielding very large gradients).
One solution is to clip gradients (see section 10.11.1) while another is to scale
the gradients heuristically (Murray and Larochelle, 2014).
Gaussian mixture outputs are particularly effective in generative models of
speech (Schuster, 1999) or movements of physical objects (Graves, 2013). The
mixture density strategy gives a way for the network to represent multiple output
modes and to control the variance of its output, which is crucial for obtaining
a high degree of quality in these real-valued domains. An example of a mixture
density network is shown in figure 6.4.
In general, we may wish to continue to model larger vectors y containing more
variables, and to impose richer and richer structures on these output variables. For
example, we may wish for our neural network to output a sequence of characters
that forms a sentence. In these cases, we may continue to use the principle
of maximum likelihood applied to our model p( y; ω(x)), but the model we use
Figure 6.4: Samples drawn from a neural network with a mixture density output layer.
The input x is sampled from a uniform distribution and the output y is sampled from
pmodel(y | x). The neural network is able to learn nonlinear mappings from the input to
the parameters of the output distribution. These parameters include the probabilities
governing which of three mixture components will generate the output as well as the
parameters for each mixture component. Each mixture component is Gaussian with
predicted mean and variance. All of these aspects of the output distribution are able to
vary with respect to the input x, and to do so in nonlinear ways.
to describe y becomes complex enough to be beyond the scope of this chapter.
Chapter 10 describes how to use recurrent neural networks to define such models
over sequences, and part III describes advanced techniques for modeling arbitrary
probability distributions.
6.3 Hidden Units
So far we have focused our discussion on design choices for neural networks that
are common to most parametric machine learning models trained with gradient-
based optimization. Now we turn to an issue that is unique to feedforward neural
networks: how to choose the type of hidden unit to use in the hidden layers of the
model.
The design of hidden units is an extremely active area of research and does not
yet have many definitive guiding theoretical principles.
Rectified linear units are an excellent default choice of hidden unit. Many other
types of hidden units are available. It can be difficult to determine when to use
which kind (though rectified linear units are usually an acceptable choice). We
describe here some of the basic intuitions motivating each type of hidden units.
These intuitions can help decide when to try out each of these units. It is usually
impossible to predict in advance which will work best. The design process consists
of trial and error, intuiting that a kind of hidden unit may work well, and then
training a network with that kind of hidden unit and evaluating its performance
on a validation set.
Some of the hidden units included in this list are not actually differentiable at
all input points. For example, the rectified linear function g(z) = max{0, z} is not
differentiable at z = 0. This may seem like it invalidates g for use with a gradient-
based learning algorithm. In practice, gradient descent still performs well enough
for these models to be used for machine learning tasks. This is in part because
neural network training algorithms do not usually arrive at a local minimum of
the cost function, but instead merely reduce its value significantly, as shown in
figure 4.3. These ideas will be described further in chapter 8. Because we do not
expect training to actually reach a point where the gradient is 0, it is acceptable
for the minima of the cost function to correspond to points with undefined gradient.
Hidden units that are not differentiable are usually non-differentiable at only a
small number of points. In general, a function g(z) has a left derivative defined
by the slope of the function immediately to the left of z and a right derivative
defined by the slope of the function immediately to the right of z. A function
is differentiable at z only if both the left derivative and the right derivative are
defined and equal to each other. The functions used in the context of neural
networks usually have defined left derivatives and defined right derivatives. In the
case of g(z) = max{0, z}, the left derivative at z = 0 is 0 and the right derivative
is 1. Software implementations of neural network training usually return one of
the one-sided derivatives rather than reporting that the derivative is undefined or
raising an error. This may be heuristically justified by observing that gradient-
based optimization on a digital computer is subject to numerical error anyway.
When a function is asked to evaluate g(0), it is very unlikely that the underlying
value truly was 0. Instead, it was likely to be some small value ε that was rounded
to 0. In some contexts, more theoretically pleasing justifications are available, but
these usually do not apply to neural network training. The important point is that
in practice one can safely disregard the non-differentiability of the hidden unit
activation functions described below.
Unless indicated otherwise, most hidden units can be described as accepting
a vector of inputs x, computing an affine transformation z = W x + b, and
then applying an element-wise nonlinear function g(z). Most hidden units are
distinguished from each other only by the choice of the form of the activation
function g(z).
6.3.1 Rectified Linear Units and Their Generalizations
Rectified linear units use the activation function g(z) = max{0, z}.
Rectified linear units are easy to optimize because they are so similar to linear
units. The only difference between a linear unit and a rectified linear unit is
that a rectified linear unit outputs zero across half its domain. This makes the
derivatives through a rectified linear unit remain large whenever the unit is active.
The gradients are not only large but also consistent. The second derivative of the
rectifying operation is 0 almost everywhere, and the derivative of the rectifying
operation is 1 everywhere that the unit is active. This means that the gradient
direction is far more useful for learning than it would be with activation functions
that introduce second-order effects.
Rectified linear units are typically used on top of an affine transformation:
h = g(W⊤x + b). (6.36)
When initializing the parameters of the affine transformation, it can be a good
practice to set all elements of b to a small, positive value, such as 0.1. This makes
it very likely that the rectified linear units will be initially active for most inputs
in the training set and allow the derivatives to pass through.
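As a small illustration (NumPy; the layer sizes and weight-initialization scale are assumptions made for this sketch), a rectified linear layer with the small positive bias initialization mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 128

W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))  # assumed scale
b = np.full(n_out, 0.1)  # small positive bias keeps most units initially active

def relu_layer(x):
    # h = g(Wᵀx + b) with g(z) = max{0, z}  (equation 6.36)
    return np.maximum(0.0, W.T @ x + b)
```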
Several generalizations of rectified linear units exist. Most of these general-
izations perform comparably to rectified linear units and occasionally perform
better.
One drawback to rectified linear units is that they cannot learn via gradient-
based methods on examples for which their activation is zero. A variety of
generalizations of rectified linear units guarantee that they receive gradient every-
where.
Three generalizations of rectified linear units are based on using a non-zero
slope αi when zi < 0: hi = g(z, α)i = max(0, zi) + αi min(0, zi). Absolute value
rectification fixes αi = −1 to obtain g(z) = |z|. It is used for object recognition
from images (Jarrett et al., 2009), where it makes sense to seek features that are
invariant under a polarity reversal of the input illumination. Other generalizations
of rectified linear units are more broadly applicable. A leaky ReLU (Maas et al.,
2013) fixes αi to a small value like 0.01 while a parametric ReLU or PReLU
treats αi as a learnable parameter (He et al., 2015).
Maxout units (Goodfellow et al., 2013a) generalize rectified linear units
further. Instead of applying an element-wise function g(z ), maxout units divide z
into groups of k values. Each maxout unit then outputs the maximum element of
one of these groups:
g(z)i = max_{j∈G^(i)} zj, (6.37)
where G^(i) is the set of indices into the inputs for group i, {(i − 1)k + 1, . . . , ik}.
This provides a way of learning a piecewise linear function that responds to multiple
directions in the input space x.
A maxout unit can learn a piecewise linear, convex function with up to k pieces.
Maxout units can thus be seen as learning the activation function itself rather
than just the relationship between units. With large enough k, a maxout unit can
learn to approximate any convex function with arbitrary fidelity. In particular,
a maxout layer with two pieces can learn to implement the same function of the
input x as a traditional layer using the rectified linear activation function, absolute
value rectification function, or the leaky or parametric ReLU, or can learn to
implement a totally different function altogether. The maxout layer will of course
be parametrized differently from any of these other layer types, so the learning
dynamics will be different even in the cases where maxout learns to implement the
same function of as one of the other layer types.
x
Each maxout unit is now parametrized by k weight vectors instead of just one,
so maxout units typically need more regularization than rectified linear units. They
can work well without regularization if the training set is large and the number of
pieces per unit is kept low (Cai et al., 2013).
Maxout units have a few other benefits. In some cases, one can gain some sta-
tistical and computational advantages by requiring fewer parameters. Specifically,
if the features captured by n different linear filters can be summarized without
losing information by taking the max over each group of k features, then the next
layer can get by with k times fewer weights.
Because each unit is driven by multiple filters, maxout units have some redun-
dancy that helps them to resist a phenomenon called catastrophic forgetting
in which neural networks forget how to perform tasks that they were trained on in
the past (Goodfellow et al., 2014a).
Rectified linear units and all of these generalizations of them are based on the
principle that models are easier to optimize if their behavior is closer to linear.
This same general principle of using linear behavior to obtain easier optimization
also applies in other contexts besides deep linear networks. Recurrent networks can
learn from sequences and produce a sequence of states and outputs. When training
them, one needs to propagate information through several time steps, which is much
easier when some linear computations (with some directional derivatives being of
magnitude near 1) are involved. One of the best-performing recurrent network
architectures, the LSTM, propagates information through time via summation—a
particularly straightforward kind of such linear activation. This is discussed further
in section 10.10.
6.3.2 Logistic Sigmoid and Hyperbolic Tangent
Prior to the introduction of rectified linear units, most neural networks used the
logistic sigmoid activation function
g(z) = σ(z) (6.38)
or the hyperbolic tangent activation function
g(z) = tanh(z). (6.39)
These activation functions are closely related because tanh(z) = 2σ(2z) − 1.
We have already seen sigmoid units as output units, used to predict the
probability that a binary variable is 1. Unlike piecewise linear units, sigmoidal
units saturate across most of their domain—they saturate to a high value when
z is very positive, saturate to a low value when z is very negative, and are only
strongly sensitive to their input when z is near 0. The widespread saturation of
sigmoidal units can make gradient-based learning very difficult. For this reason,
their use as hidden units in feedforward networks is now discouraged. Their use
as output units is compatible with the use of gradient-based learning when an
appropriate cost function can undo the saturation of the sigmoid in the output
layer.
When a sigmoidal activation function must be used, the hyperbolic tangent
activation function typically performs better than the logistic sigmoid. It resembles
the identity function more closely, in the sense that tanh(0) = 0 while σ(0) = 1/2.
Because tanh is similar to the identity function near 0, training a deep neural
network ŷ = w⊤tanh(U⊤tanh(V⊤x)) resembles training a linear model ŷ =
w⊤U⊤V⊤x so long as the activations of the network can be kept small. This
makes training the tanh network easier.
Sigmoidal activation functions are more common in settings other than feed-
forward networks. Recurrent networks, many probabilistic models, and some
autoencoders have additional requirements that rule out the use of piecewise
linear activation functions and make sigmoidal units more appealing despite the
drawbacks of saturation.
6.3.3 Other Hidden Units
Many other types of hidden units are possible, but are used less frequently.
In general, a wide variety of differentiable functions perform perfectly well.
Many unpublished activation functions perform just as well as the popular ones.
To provide a concrete example, the authors tested a feedforward network using
h = cos(Wx + b) on the MNIST dataset and obtained an error rate of less than
1%, which is competitive with results obtained using more conventional activation
functions. During research and development of new techniques, it is common
to test many different activation functions and find that several variations on
standard practice perform comparably. This means that usually new hidden unit
types are published only if they are clearly demonstrated to provide a significant
improvement. New hidden unit types that perform roughly comparably to known
types are so common as to be uninteresting.
It would be impractical to list all of the hidden unit types that have appeared
in the literature. We highlight a few especially useful and distinctive ones.
One possibility is to not have an activation g(z) at all. One can also think of
this as using the identity function as the activation function. We have already
seen that a linear unit can be useful as the output of a neural network. It may
also be used as a hidden unit. If every layer of the neural network consists of only
linear transformations, then the network as a whole will be linear. However, it
is acceptable for some layers of the neural network to be purely linear. Consider
a neural network layer with n inputs and p outputs, h = g(W⊤x + b). We may
replace this with two layers, with one layer using weight matrix U and the other
using weight matrix V. If the first layer has no activation function, then we have
essentially factored the weight matrix of the original layer based on W. The
factored approach is to compute h = g(V⊤U⊤x + b). If U produces q outputs,
then U and V together contain only (n + p)q parameters, while W contains np
parameters. For small q, this can be a considerable saving in parameters. It
comes at the cost of constraining the linear transformation to be low-rank, but
these low-rank relationships are often sufficient. Linear hidden units thus offer an
effective way of reducing the number of parameters in a network.
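A small worked illustration of the factored replacement and its parameter savings (NumPy; the sizes n, p, q and the choice of tanh are arbitrary assumptions for this sketch):

```python
import numpy as np

n, p, q = 1000, 1000, 50          # q much smaller than n and p: low-rank factorization
rng = np.random.default_rng(0)

U = rng.normal(size=(n, q))       # first "layer": no activation function
V = rng.normal(size=(q, p))       # second layer weights
b = np.zeros(p)

def factored_layer(x, g=np.tanh):
    # h = g(Vᵀ Uᵀ x + b): equivalent to using a rank-q weight matrix W = U V
    return g(V.T @ (U.T @ x) + b)

print((n + p) * q, "parameters instead of", n * p)  # 100000 instead of 1000000
```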
Softmax units are another kind of unit that is usually used as an output (as
described in section 6.2.2.3) but may sometimes be used as a hidden unit. Softmax
units naturally represent a probability distribution over a discrete variable with k
possible values, so they may be used as a kind of switch. These kinds of hidden
units are usually only used in more advanced architectures that explicitly learn to
manipulate memory, described in section 10.12.
A few other reasonably common hidden unit types include:
• Radial basis function or RBF unit: hi = exp(−‖W:,i − x‖² / σi²). This
function becomes more active as x approaches a template W:,i. Because it
saturates to 0 for most x, it can be difficult to optimize.
• Softplus: g(a) = ζ(a) = log(1 + e^a). This is a smooth version of the rectifier,
introduced by Dugas et al. (2001) for function approximation and by Nair
and Hinton (2010) for the conditional distributions of undirected probabilistic
models. Glorot et al. (2011a) compared the softplus and rectifier and found
better results with the latter. The use of the softplus is generally discouraged.
The softplus demonstrates that the performance of hidden unit types can
be very counterintuitive—one might expect it to have an advantage over
the rectifier due to being differentiable everywhere or due to saturating less
completely, but empirically it does not.
• Hard tanh: this is shaped similarly to the tanh and the rectifier but unlike
the latter, it is bounded, g(a) = max(−1, min(1, a)). It was introduced
by Collobert (2004).
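For concreteness, minimal NumPy definitions of these three units (illustrative sketches of the formulas above, not library implementations):

```python
import numpy as np

def rbf_unit(x, template, sigma):
    # h = exp(-||template - x||^2 / sigma^2): active only when x is near the template
    return np.exp(-np.sum((template - x) ** 2) / sigma ** 2)

def softplus(a):
    # zeta(a) = log(1 + exp(a)), a smooth version of the rectifier
    return np.logaddexp(0.0, a)

def hard_tanh(a):
    # bounded, piecewise linear approximation to tanh
    return np.clip(a, -1.0, 1.0)
```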
Hidden unit design remains an active area of research and many useful hidden
unit types remain to be discovered.
6.4 Architecture Design
Another key design consideration for neural networks is determining the architecture.
The word architecture refers to the overall structure of the network: how many
units it should have and how these units should be connected to each other.
Most neural networks are organized into groups of units called layers. Most
neural network architectures arrange these layers in a chain structure, with each
layer being a function of the layer that preceded it. In this structure, the first layer
is given by
h^(1) = g^(1)(W^(1)⊤x + b^(1)), (6.40)
the second layer is given by
h^(2) = g^(2)(W^(2)⊤h^(1) + b^(2)), (6.41)
and so on.
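A minimal sketch of such a chain (NumPy; the layer widths and the choice of rectified linear hidden units with a linear output are arbitrary assumptions for this example):

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [10, 32, 16, 1]  # input, two hidden layers, output

params = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]

def forward(x):
    h = x
    for i, (W, b) in enumerate(params):
        z = W.T @ h + b
        # hidden layers apply a nonlinearity; the final layer is a linear output
        h = np.maximum(0.0, z) if i < len(params) - 1 else z
    return h
```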
In these chain-based architectures, the main architectural considerations are
to choose the depth of the network and the width of each layer. As we will see,
a network with even one hidden layer is sufficient to fit the training set. Deeper
networks often are able to use far fewer units per layer and far fewer parameters
and often generalize to the test set, but are also often harder to optimize. The
ideal network architecture for a task must be found via experimentation guided by
monitoring the validation set error.
6.4.1 Universal Approximation Properties and Depth
A linear model, mapping from features to outputs via matrix multiplication, can
by definition represent only linear functions. It has the advantage of being easy to
train because many loss functions result in convex optimization problems when
applied to linear models. Unfortunately, we often want to learn nonlinear functions.
At first glance, we might presume that learning a nonlinear function requires
designing a specialized model family for the kind of nonlinearity we want to learn.
Fortunately, feedforward networks with hidden layers provide a universal approxi-
mation framework. Specifically, the universal approximation theorem (Hornik
et al., 1989; Cybenko, 1989) states that a feedforward network with a linear output
layer and at least one hidden layer with any “squashing” activation function (such
as the logistic sigmoid activation function) can approximate any Borel measurable
function from one finite-dimensional space to another with any desired non-zero
amount of error, provided that the network is given enough hidden units. The
derivatives of the feedforward network can also approximate the derivatives of the
function arbitrarily well (Hornik et al., 1990). The concept of Borel measurability
is beyond the scope of this book; for our purposes it suffices to say that any
continuous function on a closed and bounded subset of Rn is Borel measurable
and therefore may be approximated by a neural network. A neural network may
also approximate any function mapping from any finite dimensional discrete space
to another. While the original theorems were first stated in terms of units with
activation functions that saturate both for very negative and for very positive
arguments, universal approximation theorems have also been proved for a wider
class of activation functions, which includes the now commonly used rectified linear
unit (Leshno et al., 1993).
The universal approximation theorem means that regardless of what function
we are trying to learn, we know that a large MLP will be able to represent this
function. However, we are not guaranteed that the training algorithm will be able
to learn that function. Even if the MLP is able to represent the function, learning
can fail for two different reasons. First, the optimization algorithm used for training
may not be able to find the value of the parameters that corresponds to the desired
function. Second, the training algorithm might choose the wrong function due to
overfitting. Recall from section 5.2.1 that the “no free lunch” theorem shows that
there is no universally superior machine learning algorithm. Feedforward networks
provide a universal system for representing functions, in the sense that, given a
function, there exists a feedforward network that approximates the function. There
is no universal procedure for examining a training set of specific examples and
choosing a function that will generalize to points not in the training set.
The universal approximation theorem says that there exists a network large
enough to achieve any degree of accuracy we desire, but the theorem does not
say how large this network will be. Barron (1993) provides some bounds on the
size of a single-layer network needed to approximate a broad class of functions.
Unfortunately, in the worst case, an exponential number of hidden units (possibly
with one hidden unit corresponding to each input configuration that needs to be
distinguished) may be required. This is easiest to see in the binary case: the
number of possible binary functions on vectors v ∈ {0, 1}^n is 2^(2^n) and selecting
one such function requires 2^n bits, which will in general require O(2^n) degrees of
freedom.
In summary, a feedforward network with a single layer is sufficient to represent
any function, but the layer may be infeasibly large and may fail to learn and
generalize correctly. In many circumstances, using deeper models can reduce the
number of units required to represent the desired function and can reduce the
amount of generalization error.
There exist families of functions which can be approximated efficiently by an
architecture with depth greater than some value d, but which require a much larger
model if depth is restricted to be less than or equal to d. In many cases, the number
of hidden units required by the shallow model is exponential in n. Such results
were first proved for models that do not resemble the continuous, differentiable
neural networks used for machine learning, but have since been extended to these
models. The first results were for circuits of logic gates (Håstad, 1986). Later
work extended these results to linear threshold units with non-negative weights
(Håstad and Goldmann, 1991; Hajnal et al., 1993), and then to networks with
continuous-valued activations (Maass, 1992; Maass et al., 1994). Many modern
neural networks use rectified linear units. Leshno et al. (1993) demonstrated
that shallow networks with a broad family of non-polynomial activation functions,
including rectified linear units, have universal approximation properties, but these
results do not address the questions of depth or efficiency—they specify only that
a sufficiently wide rectifier network could represent any function. Montufar et al.
(2014) showed that functions representable with a deep rectifier net can require
an exponential number of hidden units with a shallow (one hidden layer) network.
More precisely, they showed that piecewise linear networks (which can be obtained
from rectifier nonlinearities or maxout units) can represent functions with a number
of regions that is exponential in the depth of the network. Figure 6.5 illustrates how
a network with absolute value rectification creates mirror images of the function
computed on top of some hidden unit, with respect to the input of that hidden
unit. Each hidden unit specifies where to fold the input space in order to create
mirror responses (on both sides of the absolute value nonlinearity). By composing
these folding operations, we obtain an exponentially large number of piecewise
linear regions which can capture all kinds of regular (e.g., repeating) patterns.
Figure 6.5: An intuitive, geometric explanation of the exponential advantage of deeper
rectifier networks, formally shown by Montufar et al. (2014). (Left) An absolute value
rectification unit has the same output for every pair of mirror points in its input. The
mirror axis of symmetry is given by the hyperplane defined by the weights and bias of the
unit. A function computed on top of that unit (the green decision surface) will be a mirror
image of a simpler pattern across that axis of symmetry. (Center) The function can be
obtained by folding the space around the axis of symmetry. (Right) Another repeating
pattern can be folded on top of the first (by another downstream unit) to obtain another
symmetry (which is now repeated four times, with two hidden layers). Figure reproduced
with permission from Montufar et al. (2014).
More precisely, the main theorem in Montufar et al. (2014) states that the
number of linear regions carved out by a deep rectifier network with d inputs,
depth l, and n units per hidden layer is
O((n/d)^{d(l−1)} n^d), (6.42)
i.e., exponential in the depth l. In the case of maxout networks with k filters per
unit, the number of linear regions is
O(k^{(l−1)+d}). (6.43)
Of course, there is no guarantee that the kinds of functions we want to learn in
applications of machine learning (and in particular for AI) share such a property.
We may also want to choose a deep model for statistical reasons. Any time
we choose a specific machine learning algorithm, we are implicitly stating some
set of prior beliefs we have about what kind of function the algorithm should
learn. Choosing a deep model encodes a very general belief that the function we
want to learn should involve composition of several simpler functions. This can be
interpreted from a representation learning point of view as saying that we believe
the learning problem consists of discovering a set of underlying factors of variation
that can in turn be described in terms of other, simpler underlying factors of
variation. Alternately, we can interpret the use of a deep architecture as expressing
a belief that the function we want to learn is a computer program consisting of
multiple steps, where each step makes use of the previous step’s output. These
intermediate outputs are not necessarily factors of variation, but can instead be
analogous to counters or pointers that the network uses to organize its internal
processing. Empirically, greater depth does seem to result in better generalization
for a wide variety of tasks (Bengio et al., 2007; Erhan et al., 2009; Bengio, 2009;
Mesnil et al., 2011; Ciresan et al., 2012; Krizhevsky et al., 2012; Sermanet et al.,
2013; Farabet et al., 2013; Couprie et al., 2013; Kahou et al., 2013; Goodfellow
et al., 2014d; Szegedy et al., 2014a). See figure 6.6 and figure 6.7 for examples of
some of these empirical results. This suggests that using deep architectures does
indeed express a useful prior over the space of functions the model learns.
6.4.2 Other Architectural Considerations
So far we have described neural networks as being simple chains of layers, with the
main considerations being the depth of the network and the width of each layer.
In practice, neural networks show considerably more diversity.
Many neural network architectures have been developed for specific tasks.
Specialized architectures for computer vision called convolutional networks are
described in chapter 9. Feedforward networks may also be generalized to the
recurrent neural networks for sequence processing, described in chapter 10, which
have their own architectural considerations.
In general, the layers need not be connected in a chain, even though this is the
most common practice. Many architectures build a main chain but then add extra
architectural features to it, such as skip connections going from layer i to layer
i+ 2 or higher. These skip connections make it easier for the gradient to flow from
output layers to layers nearer the input.
Figure 6.6: Empirical results showing that deeper networks generalize better when used
to transcribe multi-digit numbers from photographs of addresses. Data from Goodfellow
et al. (2014d). The test set accuracy consistently increases with increasing depth. See
figure 6.7 for a control experiment demonstrating that other increases to the model size
do not yield the same effect.
Another key consideration of architecture design is exactly how to connect a
pair of layers to each other. In the default neural network layer described by a linear
transformation via a matrix W , every input unit is connected to every output
unit. Many specialized networks in the chapters ahead have fewer connections, so
that each unit in the input layer is connected to only a small subset of units in
the output layer. These strategies for reducing the number of connections reduce
the number of parameters and the amount of computation required to evaluate
the network, but are often highly problem-dependent. For example, convolutional
networks, described in chapter , use specialized patterns of sparse connections
9
that are very effective for computer vision problems. In this chapter, it is difficult
to give much more specific advice concerning the architecture of a generic neural
network. Subsequent chapters develop the particular architectural strategies that
have been found to work well for different application domains.
[Figure 6.7 plots test accuracy (percent) against number of parameters (up to roughly 1 × 10^8) for three curves: 3 layers, convolutional; 3 layers, fully connected; 11 layers, convolutional.]
Figure 6.7: Deeper models tend to perform better. This is not merely because the model is
larger. This experiment from Goodfellow et al. (2014d) shows that increasing the number
of parameters in layers of convolutional networks without increasing their depth is not
nearly as effective at increasing test set performance. The legend indicates the depth of
network used to make each curve and whether the curve represents variation in the size of
the convolutional or the fully connected layers. We observe that shallow models in this
context overfit at around 20 million parameters while deep ones can benefit from having
over 60 million. This suggests that using a deep model expresses a useful preference over
the space of functions the model can learn. Specifically, it expresses a belief that the
function should consist of many simpler functions composed together. This could result
either in learning a representation that is composed in turn of simpler representations (e.g.,
corners defined in terms of edges) or in learning a program with sequentially dependent
steps (e.g., first locate a set of objects, then segment them from each other, then recognize
them).
6.5 Back-Propagation and Other Differentiation Algo-
rithms
When we use a feedforward neural network to accept an input x and produce an
output ŷ, information flows forward through the network. The inputs x provide
the initial information that then propagates up to the hidden units at each layer
and finally produces ŷ . This is called forward propagation. During training,
forward propagation can continue onward until it produces a scalar cost J(θ).
The back-propagation algorithm (Rumelhart et al., 1986a), often simply called
backprop, allows the information from the cost to then flow backwards through
the network, in order to compute the gradient.
Computing an analytical expression for the gradient is straightforward, but
numerically evaluating such an expression can be computationally expensive. The
back-propagation algorithm does so using a simple and inexpensive procedure.
The term back-propagation is often misunderstood as meaning the whole
learning algorithm for multi-layer neural networks. Actually, back-propagation
refers only to the method for computing the gradient, while another algorithm,
such as stochastic gradient descent, is used to perform learning using this gradient.
Furthermore, back-propagation is often misunderstood as being specific to multi-
layer neural networks, but in principle it can compute derivatives of any function
(for some functions, the correct response is to report that the derivative of the
function is undefined). Specifically, we will describe how to compute the gradient
∇x f(x, y) for an arbitrary function f, where x is a set of variables whose derivatives
are desired, and y is an additional set of variables that are inputs to the function
but whose derivatives are not required. In learning algorithms, the gradient we most
often require is the gradient of the cost function with respect to the parameters,
∇θ J(θ). Many machine learning tasks involve computing other derivatives, either
as part of the learning process, or to analyze the learned model. The back-
propagation algorithm can be applied to these tasks as well, and is not restricted
to computing the gradient of the cost function with respect to the parameters. The
idea of computing derivatives by propagating information through a network is
very general, and can be used to compute values such as the Jacobian of a function
f with multiple outputs. We restrict our description here to the most commonly
used case where f has a single output.
6.5.1 Computational Graphs
So far we have discussed neural networks with a relatively informal graph language.
To describe the back-propagation algorithm more precisely, it is helpful to have a
more precise computational graph language.
Many ways of formalizing computation as graphs are possible.
Here, we use each node in the graph to indicate a variable. The variable may
be a scalar, vector, matrix, tensor, or even a variable of another type.
To formalize our graphs, we also need to introduce the idea of an operation.
An operation is a simple function of one or more variables. Our graph language
is accompanied by a set of allowable operations. Functions more complicated
than the operations in this set may be described by composing many operations
together.
Without loss of generality, we define an operation to return only a single
output variable. This does not lose generality because the output variable can have
multiple entries, such as a vector. Software implementations of back-propagation
usually support operations with multiple outputs, but we avoid this case in our
description because it introduces many extra details that are not important to
conceptual understanding.
If a variable y is computed by applying an operation to a variable x, then
we draw a directed edge from x to y. We sometimes annotate the output node
with the name of the operation applied, and other times omit this label when the
operation is clear from context.
Examples of computational graphs are shown in figure 6.8.
6.5.2 Chain Rule of Calculus
The chain rule of calculus (not to be confused with the chain rule of probability) is
used to compute the derivatives of functions formed by composing other functions
whose derivatives are known. Back-propagation is an algorithm that computes the
chain rule, with a specific order of operations that is highly efficient.
Let x be a real number, and let f and g both be functions mapping from a real
number to a real number. Suppose that y = g(x) and z = f(g(x)) = f(y). Then
the chain rule states that
dz/dx = (dz/dy)(dy/dx). (6.44)
We can generalize this beyond the scalar case. Suppose that x ∈ Rm, y ∈ Rn,
Figure 6.8: Examples of computational graphs. (a) The graph using the × operation to
compute z = xy. (b) The graph for the logistic regression prediction ŷ = σ(x⊤w + b).
Some of the intermediate expressions do not have names in the algebraic expression
but need names in the graph. We simply name the i-th such variable u^(i). (c) The
computational graph for the expression H = max{0, XW + b}, which computes a design
matrix of rectified linear unit activations H given a design matrix containing a minibatch
of inputs X. (d) Examples a–c applied at most one operation to each variable, but it
is possible to apply more than one operation. Here we show a computation graph that
applies more than one operation to the weights w of a linear regression model. The
weights are used to make both the prediction ŷ and the weight decay penalty λ Σi wi².
g maps from R^m to R^n, and f maps from R^n to R. If y = g(x) and z = f(y), then
\[ \frac{\partial z}{\partial x_i} = \sum_j \frac{\partial z}{\partial y_j}\,\frac{\partial y_j}{\partial x_i}. \qquad (6.45) \]
In vector notation, this may be equivalently written as
\[ \nabla_{\boldsymbol{x}} z = \left(\frac{\partial \boldsymbol{y}}{\partial \boldsymbol{x}}\right)^{\!\top} \nabla_{\boldsymbol{y}} z, \qquad (6.46) \]
where ∂y/∂x is the n × m Jacobian matrix of g.
From this we see that the gradient of a variable x can be obtained by multiplying
a Jacobian matrix ∂y/∂x by a gradient ∇_y z. The back-propagation algorithm consists
of performing such a Jacobian-gradient product for each operation in the graph.
Usually we do not apply the back-propagation algorithm merely to vectors,
but rather to tensors of arbitrary dimensionality. Conceptually, this is exactly the
same as back-propagation with vectors. The only difference is how the numbers
are arranged in a grid to form a tensor. We could imagine flattening each tensor
into a vector before we run back-propagation, computing a vector-valued gradient,
and then reshaping the gradient back into a tensor. In this rearranged view,
back-propagation is still just multiplying Jacobians by gradients.
To denote the gradient of a value z with respect to a tensor X, we write ∇_X z,
just as if X were a vector. The indices into X now have multiple coordinates; for
example, a 3-D tensor is indexed by three coordinates. We can abstract this away
by using a single variable i to represent the complete tuple of indices. For all
possible index tuples i, (∇_X z)_i gives ∂z/∂X_i. This is exactly the same as how for all
possible integer indices i into a vector, (∇_x z)_i gives ∂z/∂x_i. Using this notation, we
can write the chain rule as it applies to tensors. If Y = g(X) and z = f(Y), then
\[ \nabla_{\mathbf{X}} z = \sum_j (\nabla_{\mathbf{X}} Y_j)\,\frac{\partial z}{\partial Y_j}. \qquad (6.47) \]
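For a concrete check of equations 6.45 and 6.46, the short NumPy sketch below picks arbitrary functions g and f, forms the Jacobian-gradient product by hand, and compares it with finite differences of z = f(g(x)). The specific g, f and input values are illustrative assumptions, not anything prescribed by the text.

```python
import numpy as np

def g(x):                      # g: R^3 -> R^2
    return np.array([x[0] * x[1], x[1] + x[2] ** 2])

def f(y):                      # f: R^2 -> R
    return y[0] ** 2 + 3.0 * y[1]

x = np.array([1.0, 2.0, 0.5])
y = g(x)

# Hand-derived Jacobian of g at x (shape 2 x 3) and gradient of f at y.
J_g = np.array([[x[1], x[0], 0.0],
                [0.0, 1.0, 2.0 * x[2]]])
grad_y = np.array([2.0 * y[0], 3.0])

# Equation 6.46: grad_x z = (dy/dx)^T grad_y z.
grad_x = J_g.T @ grad_y

# Compare against central finite differences of z = f(g(x)).
eps = 1e-6
fd = np.array([(f(g(x + eps * e)) - f(g(x - eps * e))) / (2 * eps)
               for e in np.eye(3)])
print(np.allclose(grad_x, fd, atol=1e-5))   # True
```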
6.5.3 Recursively Applying the Chain Rule to Obtain Backprop
Using the chain rule, it is straightforward to write down an algebraic expression for
the gradient of a scalar with respect to any node in the computational graph that
produced that scalar. However, actually evaluating that expression in a computer
introduces some extra considerations.
Specifically, many subexpressions may be repeated several times within the
overall expression for the gradient. Any procedure that computes the gradient
will need to choose whether to store these subexpressions or to recompute them
several times. An example of how these repeated subexpressions arise is given in
figure 6.9. In some cases, computing the same subexpression twice would simply
be wasteful. For complicated graphs, there can be exponentially many of these
wasted computations, making a naive implementation of the chain rule infeasible.
In other cases, computing the same subexpression twice could be a valid way to
reduce memory consumption at the cost of higher runtime.
We begin with a version of the back-propagation algorithm that specifies the
actual gradient computation directly (algorithm 6.2, along with algorithm 6.1 for the
associated forward computation), in the order it will actually be done and according
to the recursive application of the chain rule. One could either directly perform these
computations or view the description of the algorithm as a symbolic specification
of the computational graph for computing the back-propagation. However, this
formulation does not make explicit the manipulation and the construction of the
symbolic graph that performs the gradient computation. Such a formulation is
presented below in section 6.5.6, with algorithm 6.5, where we also generalize to
nodes that contain arbitrary tensors.
First consider a computational graph describing how to compute a single scalar
u^(n) (say the loss on a training example). This scalar is the quantity whose
gradient we want to obtain, with respect to the n_i input nodes u^(1) to u^(n_i). In
other words, we wish to compute ∂u^(n)/∂u^(i) for all i ∈ {1, 2, . . . , n_i}. In the application
of back-propagation to computing gradients for gradient descent over parameters,
u^(n) will be the cost associated with an example or a minibatch, while u^(1) to u^(n_i)
correspond to the parameters of the model.
We will assume that the nodes of the graph have been ordered in such a way
that we can compute their output one after the other, starting at u^(n_i + 1) and
going up to u^(n). As defined in algorithm 6.1, each node u^(i) is associated with an
operation f^(i) and is computed by evaluating the function
\[ u^{(i)} = f^{(i)}(\mathbb{A}^{(i)}) \qquad (6.48) \]
where A^(i) is the set of all nodes that are parents of u^(i).
That algorithm specifies the forward propagation computation, which we could
put in a graph G. In order to perform back-propagation, we can construct a
computational graph that depends on G and adds to it an extra set of nodes. These
form a subgraph B with one node per node of G. Computation in B proceeds in
exactly the reverse of the order of computation in G, and each node of B computes
the derivative ∂u^(n)/∂u^(i) associated with the forward graph node u^(i). This is done
Algorithm 6.1 A procedure that performs the computations mapping n_i inputs
u^(1) to u^(n_i) to an output u^(n). This defines a computational graph where each node
computes numerical value u^(i) by applying a function f^(i) to the set of arguments
A^(i) that comprises the values of previous nodes u^(j), j < i, with j ∈ Pa(u^(i)). The
input to the computational graph is the vector x, and is set into the first n_i nodes
u^(1) to u^(n_i). The output of the computational graph is read off the last (output)
node u^(n).

for i = 1, . . . , n_i do
  u^(i) ← x_i
end for
for i = n_i + 1, . . . , n do
  A^(i) ← {u^(j) | j ∈ Pa(u^(i))}
  u^(i) ← f^(i)(A^(i))
end for
return u^(n)
using the chain rule with respect to the scalar output u^(n):
\[ \frac{\partial u^{(n)}}{\partial u^{(j)}} = \sum_{i : j \in Pa(u^{(i)})} \frac{\partial u^{(n)}}{\partial u^{(i)}}\,\frac{\partial u^{(i)}}{\partial u^{(j)}} \qquad (6.49) \]
as specified by algorithm 6.2. The subgraph B contains exactly one edge for each
edge from node u^(j) to node u^(i) of G. The edge from u^(j) to u^(i) is associated with
the computation of ∂u^(i)/∂u^(j). In addition, a dot product is performed for each node,
between the gradient already computed with respect to nodes u^(i) that are children
of u^(j) and the vector containing the partial derivatives ∂u^(i)/∂u^(j) for the same children
nodes u^(i). To summarize, the amount of computation required for performing
the back-propagation scales linearly with the number of edges in G, where the
computation for each edge corresponds to computing a partial derivative (of one
node with respect to one of its parents) as well as performing one multiplication
and one addition. Below, we generalize this analysis to tensor-valued nodes, which
is just a way to group multiple scalar values in the same node and enable more
efficient implementations.
The back-propagation algorithm is designed to reduce the number of common
subexpressions without regard to memory. Specifically, it performs on the order
of one Jacobian product per node in the graph. This can be seen from the fact
that backprop (algorithm 6.2) visits each edge from node u^(j) to node u^(i) of
the graph exactly once in order to obtain the associated partial derivative ∂u^(i)/∂u^(j).
Algorithm 6.2 Simplified version of the back-propagation algorithm for computing
the derivatives of u^(n) with respect to the variables in the graph. This example is
intended to further understanding by showing a simplified case where all variables
are scalars, and we wish to compute the derivatives with respect to u^(1), . . . , u^(n_i).
This simplified version computes the derivatives of all nodes in the graph. The
computational cost of this algorithm is proportional to the number of edges in
the graph, assuming that the partial derivative associated with each edge requires
a constant time. This is of the same order as the number of computations for
the forward propagation. Each ∂u^(i)/∂u^(j) is a function of the parents u^(j) of u^(i), thus
linking the nodes of the forward graph to those added for the back-propagation
graph.

Run forward propagation (algorithm 6.1 for this example) to obtain the activations of the network.
Initialize grad_table, a data structure that will store the derivatives that have been computed. The entry grad_table[u^(i)] will store the computed value of ∂u^(n)/∂u^(i).
grad_table[u^(n)] ← 1
for j = n − 1 down to 1 do
  The next line computes ∂u^(n)/∂u^(j) = Σ_{i : j ∈ Pa(u^(i))} (∂u^(n)/∂u^(i)) (∂u^(i)/∂u^(j)) using stored values:
  grad_table[u^(j)] ← Σ_{i : j ∈ Pa(u^(i))} grad_table[u^(i)] ∂u^(i)/∂u^(j)
end for
return {grad_table[u^(i)] | i = 1, . . . , n_i}
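As an illustration, the following Python sketch runs both procedures on a tiny scalar graph computing (x₀ + x₁)·x₁. The way nodes and local partial derivatives are stored, and all names in the sketch, are invented for this example; they are not part of the algorithms as stated above.

```python
# Minimal sketch of algorithms 6.1 and 6.2 on a scalar graph computing
# u3 = (x0 + x1) * x1. Nodes 0 and 1 are inputs; each remaining node stores its
# operation, its parents, and the local partials d u_i / d u_parent.
nodes = {
    2: (lambda a, b: a + b, [0, 1], [lambda a, b: 1.0, lambda a, b: 1.0]),
    3: (lambda c, b: c * b, [2, 1], [lambda c, b: b,   lambda c, b: c]),
}
n, n_i = 3, 2                              # output node index, number of inputs

def forward(x):                            # algorithm 6.1: evaluate nodes in order
    u = {i: x[i] for i in range(n_i)}
    for i in sorted(nodes):
        f, parents, _ = nodes[i]
        u[i] = f(*[u[j] for j in parents])
    return u

def backward(u):                           # algorithm 6.2: fill grad_table in reverse
    grad_table = {n: 1.0}
    for j in range(n - 1, -1, -1):
        total = 0.0
        for i, (f, parents, partials) in nodes.items():
            if j in parents and i in grad_table:
                k = parents.index(j)       # each parent appears once in this graph
                total += grad_table[i] * partials[k](*[u[p] for p in parents])
        grad_table[j] = total
    return grad_table

u = forward([1.0, 2.0])                    # u[3] = (1 + 2) * 2 = 6
g = backward(u)
print(u[3], g[0], g[1])                    # 6.0, d u3/d x0 = 2.0, d u3/d x1 = 5.0
```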
Back-propagation thus avoids the exponential explosion in repeated subexpressions.
However, other algorithms may be able to avoid more subexpressions by performing
simplifications on the computational graph, or may be able to conserve memory by
recomputing rather than storing some subexpressions. We will revisit these ideas
after describing the back-propagation algorithm itself.
6.5.4 Back-Propagation Computation in Fully-Connected MLP
To clarify the above definition of the back-propagation computation, let us consider
the specific graph associated with a fully-connected multilayer perceptron (MLP).
Algorithm 6.3 first shows the forward propagation, which maps parameters to
the supervised loss L(ŷ, y) associated with a single (input, target) training example
(x, y), with ŷ the output of the neural network when x is provided as input.
Algorithm 6.4 then shows the corresponding computation to be done for
applying the back-propagation algorithm to this graph.
Figure 6.9: A computational graph that results in repeated subexpressions when computing
the gradient. Let w ∈ R be the input to the graph. We use the same function f : R → R
as the operation that we apply at every step of a chain: x = f(w), y = f(x), z = f(y).
To compute ∂z/∂w, we apply equation 6.44 and obtain:
\[
\begin{aligned}
\frac{\partial z}{\partial w} & & (6.50)\\
&= \frac{\partial z}{\partial y}\,\frac{\partial y}{\partial x}\,\frac{\partial x}{\partial w} & (6.51)\\
&= f'(y)\, f'(x)\, f'(w) & (6.52)\\
&= f'\big(f(f(w))\big)\, f'\big(f(w)\big)\, f'(w) & (6.53)
\end{aligned}
\]
Equation 6.52 suggests an implementation in which we compute the value of f(w) only
once and store it in the variable x. This is the approach taken by the back-propagation
algorithm. An alternative approach is suggested by equation 6.53, where the subexpression
f(w) appears more than once. In the alternative approach, f(w) is recomputed each time
it is needed. When the memory required to store the value of these expressions is low, the
back-propagation approach of equation 6.52 is clearly preferable because of its reduced
runtime. However, equation 6.53 is also a valid implementation of the chain rule, and is
useful when memory is limited.
Algorithms 6.3 and 6.4 are demonstrations that are chosen to be simple and
straightforward to understand. However, they are specialized to one specific
problem.
Modern software implementations are based on the generalized form of back-propagation
described in section 6.5.6 below, which can accommodate any computational graph by
explicitly manipulating a data structure for representing symbolic computation.
Algorithm 6.3 Forward propagation through a typical deep neural network and
the computation of the cost function. The loss L(ŷ, y) depends on the output
ŷ and on the target y (see section 6.2.1.1 for examples of loss functions). To
obtain the total cost J, the loss may be added to a regularizer Ω(θ), where θ
contains all the parameters (weights and biases). Algorithm 6.4 shows how to
compute gradients of J with respect to parameters W and b. For simplicity, this
demonstration uses only a single input example x. Practical applications should
use a minibatch. See section 6.5.7 for a more realistic demonstration.

Require: Network depth, l
Require: W^(i), i ∈ {1, . . . , l}, the weight matrices of the model
Require: b^(i), i ∈ {1, . . . , l}, the bias parameters of the model
Require: x, the input to process
Require: y, the target output
h^(0) = x
for k = 1, . . . , l do
  a^(k) = b^(k) + W^(k) h^(k−1)
  h^(k) = f(a^(k))
end for
ŷ = h^(l)
J = L(ŷ, y) + λΩ(θ)
6.5.5 Symbol-to-Symbol Derivatives
Algebraic expressions and computational graphs both operate on symbols, or
variables that do not have specific values. These algebraic and graph-based
representations are called symbolic representations. When we actually use or
train a neural network, we must assign specific values to these symbols. We
replace a symbolic input to the network x with a specific numeric value, such as
[1.2, 3.765, −1.8]^⊤.
Algorithm 6.4 Backward computation for the deep neural network of algorithm 6.3,
which uses, in addition to the input x, a target y. This computation
yields the gradients on the activations a^(k) for each layer k, starting from the
output layer and going backwards to the first hidden layer. From these gradients,
which can be interpreted as an indication of how each layer's output should change
to reduce error, one can obtain the gradient on the parameters of each layer. The
gradients on weights and biases can be immediately used as part of a stochastic
gradient update (performing the update right after the gradients have been
computed) or used with other gradient-based optimization methods.

After the forward computation, compute the gradient on the output layer:
g ← ∇_ŷ J = ∇_ŷ L(ŷ, y)
for k = l, l − 1, . . . , 1 do
  Convert the gradient on the layer's output into a gradient on the pre-nonlinearity activation (element-wise multiplication if f is element-wise):
  g ← ∇_{a^(k)} J = g ⊙ f′(a^(k))
  Compute gradients on weights and biases (including the regularization term, where needed):
  ∇_{b^(k)} J = g + λ ∇_{b^(k)} Ω(θ)
  ∇_{W^(k)} J = g h^(k−1)⊤ + λ ∇_{W^(k)} Ω(θ)
  Propagate the gradients w.r.t. the next lower-level hidden layer's activations:
  g ← ∇_{h^(k−1)} J = W^(k)⊤ g
end for
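A NumPy sketch of algorithms 6.3 and 6.4 follows. The choice of nonlinearity (tanh), loss (half squared error), layer sizes, random data, and the absence of a regularizer are illustrative assumptions made only for this sketch; the algorithms themselves leave f, L and Ω unspecified.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 5, 3]                               # layer widths: input, hidden, output
W = [rng.normal(scale=0.1, size=(sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]
b = [np.zeros(sizes[k + 1]) for k in range(len(sizes) - 1)]
f, df = np.tanh, lambda a: 1.0 - np.tanh(a) ** 2

def forward(x):
    h, a_list, h_list = x, [], [x]
    for Wk, bk in zip(W, b):                    # a^(k) = b^(k) + W^(k) h^(k-1)
        a = bk + Wk @ h
        h = f(a)                                # h^(k) = f(a^(k))
        a_list.append(a)
        h_list.append(h)
    return h, a_list, h_list

def backward(x, y):
    yhat, a_list, h_list = forward(x)
    g = yhat - y                                # grad of 0.5*||yhat - y||^2 w.r.t. yhat
    grads_W, grads_b = [None] * len(W), [None] * len(W)
    for k in reversed(range(len(W))):
        g = g * df(a_list[k])                   # gradient on pre-activation a^(k)
        grads_b[k] = g                          # dJ/db^(k)
        grads_W[k] = np.outer(g, h_list[k])     # dJ/dW^(k) = g h^(k-1)T
        g = W[k].T @ g                          # propagate to h^(k-1)
    return grads_W, grads_b

x, y = rng.normal(size=4), rng.normal(size=3)
gW, gb = backward(x, y)
print([G.shape for G in gW])                    # [(5, 4), (3, 5)]
```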
Figure 6.10: An example of the symbol-to-symbol approach to computing derivatives. In
this approach, the back-propagation algorithm does not need to ever access any actual
specific numeric values. Instead, it adds nodes to a computational graph describing how
to compute these derivatives. A generic graph evaluation engine can later compute the
derivatives for any specific numeric values. (Left) In this example, we begin with a graph
representing z = f(f(f(w))). (Right) We run the back-propagation algorithm, instructing
it to construct the graph for the expression corresponding to dz/dw. In this example, we do
not explain how the back-propagation algorithm works. The purpose is only to illustrate
what the desired result is: a computational graph with a symbolic description of the
derivative.
Some approaches to back-propagation take a computational graph and a set
of numerical values for the inputs to the graph, then return a set of numerical
values describing the gradient at those input values. We call this approach “symbol-to-number”
differentiation. This is the approach used by libraries such as Torch
(Collobert et al., 2011b) and Caffe (Jia, 2013).
Another approach is to take a computational graph and add additional nodes
to the graph that provide a symbolic description of the desired derivatives. This
is the approach taken by Theano (Bergstra et al., 2010; Bastien et al., 2012)
and TensorFlow (Abadi et al., 2015). An example of how this approach works
is illustrated in figure 6.10. The primary advantage of this approach is that
the derivatives are described in the same language as the original expression.
Because the derivatives are just another computational graph, it is possible to run
back-propagation again, differentiating the derivatives in order to obtain higher
derivatives. Computation of higher-order derivatives is described in section 6.5.10.
We will use the latter approach and describe the back-propagation algorithm in
terms of constructing a computational graph for the derivatives. Any subset of the
graph may then be evaluated using specific numerical values at a later time. This
allows us to avoid specifying exactly when each operation should be computed.
Instead, a generic graph evaluation engine can evaluate every node as soon as its
parents’ values are available.
The description of the symbol-to-symbol based approach subsumes the symbol-
to-number approach. The symbol-to-number approach can be understood as
performing exactly the same computations as are done in the graph built by the
symbol-to-symbol approach. The key difference is that the symbol-to-number
approach does not expose the graph.
6.5.6 General Back-Propagation
The back-propagation algorithm is very simple. To compute the gradient of some
scalar z with respect to one of its ancestors x in the graph, we begin by observing
that the gradient with respect to z is given by dz
dz = 1. We can then compute
the gradient with respect to each parent of z in the graph by multiplying the
current gradient by the Jacobian of the operation that produced z. We continue
multiplying by Jacobians traveling backwards through the graph in this way until
we reach x. For any node that may be reached by going backwards from z through
two or more paths, we simply sum the gradients arriving from different paths at
that node.
More formally, each node in the graph G corresponds to a variable. To achieve
maximum generality, we describe this variable as being a tensor V. Tensors can
in general have any number of dimensions. They subsume scalars, vectors, and
matrices.
We assume that each variable V is associated with the following subroutines:
• get_operation(V): This returns the operation that computes V, represented
by the edges coming into V in the computational graph. For example,
there may be a Python or C++ class representing the matrix multiplication
operation, and the get_operation function. Suppose we have a variable that
is created by matrix multiplication, C = AB. Then get_operation(C)
returns a pointer to an instance of the corresponding C++ class.
• get_consumers(V, G): This returns the list of variables that are children of
V in the computational graph G.
• get_inputs(V, G): This returns the list of variables that are parents of V
in the computational graph G.
Each operation op is also associated with a bprop operation. This bprop
operation can compute a Jacobian-vector product as described by equation 6.47.
This is how the back-propagation algorithm is able to achieve great generality.
Each operation is responsible for knowing how to back-propagate through the
edges in the graph that it participates in. For example, we might use a matrix
multiplication operation to create a variable C = AB. Suppose that the gradient
of a scalar z with respect to C is given by G. The matrix multiplication operation
is responsible for defining two back-propagation rules, one for each of its input
arguments. If we call the bprop method to request the gradient with respect to
A given that the gradient on the output is G, then the bprop method of the
matrix multiplication operation must state that the gradient with respect to A
is given by GB. Likewise, if we call the bprop method to request the gradient
with respect to B, then the matrix operation is responsible for implementing the
bprop method and specifying that the desired gradient is given by A^⊤G. The
back-propagation algorithm itself does not need to know any differentiation rules. It
only needs to call each operation’s bprop rules with the right arguments. Formally,
op.bprop(inputs, X, G) must return
\[ \sum_i \big(\nabla_{\mathbf{X}}\, \texttt{op.f(inputs)}_i\big)\, \mathbf{G}_i, \qquad (6.54) \]
which is just an implementation of the chain rule as expressed in equation 6.47.
Here, inputs is a list of inputs that are supplied to the operation, op.f is the
mathematical function that the operation implements, X is the input whose gradient
we wish to compute, and G is the gradient on the output of the operation.
The op.bprop method should always pretend that all of its inputs are distinct
from each other, even if they are not. For example, if the mul operator is passed
two copies of x to compute x2, the op.bprop method should still return x as the
derivative with respect to both inputs. The back-propagation algorithm will later
add both of these arguments together to obtain 2x, which is the correct total
derivative on x.
Software implementations of back-propagation usually provide both the opera-
tions and their bprop methods, so that users of deep learning software libraries are
able to back-propagate through graphs built using common operations like matrix
multiplication, exponents, logarithms, and so on. Software engineers who build a
new implementation of back-propagation or advanced users who need to add their
own operation to an existing library must usually derive the op.bprop method for
any new operations manually.
The back-propagation algorithm is formally described in algorithm 6.5.
Algorithm 6.5 The outermost skeleton of the back-propagation algorithm. This
portion does simple setup and cleanup work. Most of the important work happens
in the build_grad subroutine of algorithm 6.6.

Require: T, the target set of variables whose gradients must be computed.
Require: G, the computational graph
Require: z, the variable to be differentiated
Let G′ be G pruned to contain only nodes that are ancestors of z and descendents of nodes in T.
Initialize grad_table, a data structure associating tensors to their gradients
grad_table[z] ← 1
for V in T do
  build_grad(V, G, G′, grad_table)
end for
Return grad_table restricted to T
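To make the structure of algorithms 6.5 and 6.6 concrete, here is a compact Python sketch. It handles only scalar variables and two operations (add and mul), records consumers in an explicit dictionary rather than querying a graph object, and all class and function names are invented; it is meant only to mirror the build_grad recursion and the per-operation bprop rules described above, not to be a full implementation.

```python
class Var:
    """A node in the graph: its value plus the operation and inputs that made it."""
    def __init__(self, value, op=None, inputs=()):
        self.value, self.op, self.inputs = value, op, inputs

def add(a, b):
    return Var(a.value + b.value, op="add", inputs=(a, b))

def mul(a, b):
    return Var(a.value * b.value, op="mul", inputs=(a, b))

def bprop(op, inputs, wrt, grad_out):
    # Per-operation rule: grad_out times the partial derivative w.r.t. one input.
    a, b = inputs
    if op == "add":
        return grad_out
    if op == "mul":
        return grad_out * (b.value if wrt is a else a.value)
    raise ValueError(op)

def build_grad(v, consumers, grad_table):
    # Algorithm 6.6: sum the contributions arriving from every consumer of v.
    if v in grad_table:
        return grad_table[v]
    total = 0.0
    for c in consumers[v]:
        d = build_grad(c, consumers, grad_table)
        total += bprop(c.op, c.inputs, v, d)
    grad_table[v] = total
    return total

# Tiny graph: z = (x + y) * y with x = 1, y = 2.
x, y = Var(1.0), Var(2.0)
s = add(x, y)
z = mul(s, y)
consumers = {x: [s], y: [s, z], s: [z], z: []}    # get_consumers, recorded by hand

grad_table = {z: 1.0}                              # algorithm 6.5: dz/dz = 1
print(build_grad(x, consumers, grad_table),        # 2.0
      build_grad(y, consumers, grad_table))        # 5.0
```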
In section 6.5.2, we explained that back-propagation was developed in order to
avoid computing the same subexpression in the chain rule multiple times. The naive
algorithm could have exponential runtime due to these repeated subexpressions.
Now that we have specified the back-propagation algorithm, we can understand its
computational cost. If we assume that each operation evaluation has roughly the
same cost, then we may analyze the computational cost in terms of the number
of operations executed. Keep in mind here that we refer to an operation as the
fundamental unit of our computational graph, which might actually consist of very
many arithmetic operations (for example, we might have a graph that treats matrix
multiplication as a single operation). Computing a gradient in a graph with n nodes
will never execute more than O(n^2) operations or store the output of more than
O(n^2) operations. Here we are counting operations in the computational graph, not
individual operations executed by the underlying hardware, so it is important to
remember that the runtime of each operation may be highly variable. For example,
multiplying two matrices that each contain millions of entries might correspond to
a single operation in the graph. We can see that computing the gradient requires at
most O(n^2) operations because the forward propagation stage will at worst execute
all n nodes in the original graph (depending on which values we want to compute,
we may not need to execute the entire graph). The back-propagation algorithm
adds one Jacobian-vector product, which should be expressed with O(1) nodes, per
edge in the original graph. Because the computational graph is a directed acyclic
graph, it has at most O(n^2) edges. For the kinds of graphs that are commonly used
in practice, the situation is even better. Most neural network cost functions are
Algorithm 6.6 The inner loop subroutine build_grad(V, G, G′, grad_table) of
the back-propagation algorithm, called by the back-propagation algorithm defined
in algorithm 6.5.

Require: V, the variable whose gradient should be added to G and grad_table.
Require: G, the graph to modify.
Require: G′, the restriction of G to nodes that participate in the gradient.
Require: grad_table, a data structure mapping nodes to their gradients
if V is in grad_table then
  Return grad_table[V]
end if
i ← 1
for C in get_consumers(V, G′) do
  op ← get_operation(C)
  D ← build_grad(C, G, G′, grad_table)
  G^(i) ← op.bprop(get_inputs(C, G′), V, D)
  i ← i + 1
end for
G ← Σ_i G^(i)
grad_table[V] = G
Insert G and the operations creating it into G
Return G
roughly chain-structured, causing back-propagation to have O(n) cost. This is far
better than the naive approach, which might need to execute exponentially many
nodes. This potentially exponential cost can be seen by expanding and rewriting
the recursive chain rule (equation 6.49) non-recursively:
\[ \frac{\partial u^{(n)}}{\partial u^{(j)}} = \sum_{\substack{\text{path } (u^{(\pi_1)}, u^{(\pi_2)}, \ldots, u^{(\pi_t)}),\\ \text{from } \pi_1 = j \text{ to } \pi_t = n}} \; \prod_{k=2}^{t} \frac{\partial u^{(\pi_k)}}{\partial u^{(\pi_{k-1})}}. \qquad (6.55) \]
Since the number of paths from node j to node n can grow exponentially in the
length of these paths, the number of terms in the above sum, which is the number
of such paths, can grow exponentially with the depth of the forward propagation
graph. This large cost would be incurred because the same computation for
∂u^(i)/∂u^(j) would be redone many times. To avoid such recomputation, we can think
of back-propagation as a table-filling algorithm that takes advantage of storing
intermediate results ∂u^(n)/∂u^(i). Each node in the graph has a corresponding slot in a
table to store the gradient for that node. By filling in these table entries in order,
back-propagation avoids repeating many common subexpressions. This table-filling
strategy is sometimes called dynamic programming.
6.5.7 Example: Back-Propagation for MLP Training
As an example, we walk through the back-propagation algorithm as it is used to
train a multilayer perceptron.
Here we develop a very simple multilayer perceptron with a single hidden
layer. To train this model, we will use minibatch stochastic gradient descent.
The back-propagation algorithm is used to compute the gradient of the cost on a
single minibatch. Specifically, we use a minibatch of examples from the training
set formatted as a design matrix X and a vector of associated class labels y.
The network computes a layer of hidden features H = max{0, XW(1)}. To
simplify the presentation we do not use biases in this model. We assume that our
graph language includes a relu operation that can compute max{0, Z} element-
wise. The predictions of the unnormalized log probabilities over classes are then
given by HW (2). We assume that our graph language includes a cross_entropy
operation that computes the cross-entropy between the targets y and the probability
distribution defined by these unnormalized log probabilities. The resulting cross-
entropy defines the cost JMLE. Minimizing this cross-entropy performs maximum
likelihood estimation of the classifier. However, to make this example more realistic,
we also include a regularization term. The total cost
\[ J = J_{\mathrm{MLE}} + \lambda\left( \sum_{i,j} \big(W^{(1)}_{i,j}\big)^2 + \sum_{i,j} \big(W^{(2)}_{i,j}\big)^2 \right) \qquad (6.56) \]
consists of the cross-entropy and a weight decay term with coefficient λ. The
computational graph is illustrated in figure 6.11.
The computational graph for the gradient of this example is large enough that
it would be tedious to draw or to read. This demonstrates one of the benefits
of the back-propagation algorithm, which is that it can automatically generate
gradients that would be straightforward but tedious for a software engineer to
derive manually.
We can roughly trace out the behavior of the back-propagation algorithm
by looking at the forward propagation graph in figure 6.11. To train, we wish
to compute both ∇_{W^(1)} J and ∇_{W^(2)} J. There are two different paths leading
backward from J to the weights: one through the cross-entropy cost, and one
through the weight decay cost. The weight decay cost is relatively simple; it will
always contribute 2λW^(i) to the gradient on W^(i).
Figure 6.11: The computational graph used to compute the cost used to train our example
of a single-layer MLP using the cross-entropy loss and weight decay.
The other path through the cross-entropy cost is slightly more complicated.
Let G be the gradient on the unnormalized log probabilities U^(2) provided by
the cross_entropy operation. The back-propagation algorithm now needs to
explore two different branches. On the shorter branch, it adds H^⊤G to the
gradient on W^(2), using the back-propagation rule for the second argument to
the matrix multiplication operation. The other branch corresponds to the longer
chain descending further along the network. First, the back-propagation algorithm
computes ∇_H J = GW^(2)⊤ using the back-propagation rule for the first argument
to the matrix multiplication operation. Next, the relu operation uses its back-propagation
rule to zero out components of the gradient corresponding to entries
of U^(1) that were less than 0. Let the result be called G′. The last step of the
back-propagation algorithm is to use the back-propagation rule for the second
argument of the matmul operation to add X^⊤G′ to the gradient on W^(1).
After these gradients have been computed, it is the responsibility of the gradient
descent algorithm, or another optimization algorithm, to use these gradients to
update the parameters.
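The following NumPy sketch traces these two paths numerically for the graph of figure 6.11. The minibatch size, layer widths, random data, and the convention that the cross-entropy is averaged over the minibatch are assumptions made for this sketch; only the structure of the gradients (H^⊤G, GW^(2)⊤, the relu mask, X^⊤G′, and the 2λW^(i) weight decay terms) follows the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_in, n_h, n_cls, lam = 8, 5, 7, 3, 0.01
X = rng.normal(size=(m, n_in))
y = rng.integers(0, n_cls, size=m)                 # integer class labels
W1 = rng.normal(scale=0.1, size=(n_in, n_h))
W2 = rng.normal(scale=0.1, size=(n_h, n_cls))

# Forward pass.
U1 = X @ W1
H = np.maximum(0.0, U1)                            # relu
U2 = H @ W2                                        # unnormalized log probabilities
P = np.exp(U2 - U2.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)                  # softmax probabilities
J_mle = -np.log(P[np.arange(m), y]).mean()
J = J_mle + lam * ((W1 ** 2).sum() + (W2 ** 2).sum())

# Backward pass, following the two paths described in the text.
G = P.copy()
G[np.arange(m), y] -= 1.0
G /= m                                             # gradient on U2 from cross_entropy
grad_W2 = H.T @ G + 2.0 * lam * W2                 # H^T G plus weight decay term
G_H = G @ W2.T                                     # gradient on H
G_U1 = G_H * (U1 > 0)                              # relu zeroes entries where U1 < 0
grad_W1 = X.T @ G_U1 + 2.0 * lam * W1              # X^T G' plus weight decay term

print(J, grad_W1.shape, grad_W2.shape)             # scalar, (5, 7), (7, 3)
```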
For the MLP, the computational cost is dominated by the cost of matrix
multiplication. During the forward propagation stage, we multiply by each weight
matrix, resulting in O(w) multiply-adds, where w is the number of weights. During
the backward propagation stage, we multiply by the transpose of each weight
matrix, which has the same computational cost. The main memory cost of the
algorithm is that we need to store the input to the nonlinearity of the hidden layer.
This value is stored from the time it is computed until the backward pass has
returned to the same point. The memory cost is thus O(mnh), where m is the
number of examples in the minibatch and nh is the number of hidden units.
6.5.8 Complications
Our description of the back-propagation algorithm here is simpler than the imple-
mentations actually used in practice.
As noted above, we have restricted the definition of an operation to be a
function that returns a single tensor. Most software implementations need to
support operations that can return more than one tensor. For example, if we wish
to compute both the maximum value in a tensor and the index of that value, it is
best to compute both in a single pass through memory, so it is most efficient to
implement this procedure as a single operation with two outputs.
We have not described how to control the memory consumption of back-
propagation. Back-propagation often involves summation of many tensors together.
In the naive approach, each of these tensors would be computed separately, then
all of them would be added in a second step. The naive approach has an overly
high memory bottleneck that can be avoided by maintaining a single buffer and
adding each value to that buffer as it is computed.
Real-world implementations of back-propagation also need to handle various
data types, such as 32-bit floating point, 64-bit floating point, and integer values.
The policy for handling each of these types takes special care to design.
Some operations have undefined gradients, and it is important to track these
cases and determine whether the gradient requested by the user is undefined.
Various other technicalities make real-world differentiation more complicated.
These technicalities are not insurmountable, and this chapter has described the key
intellectual tools needed to compute derivatives, but it is important to be aware
that many more subtleties exist.
6.5.9 Differentiation outside the Deep Learning Community
The deep learning community has been somewhat isolated from the broader
computer science community and has largely developed its own cultural attitudes
221
CHAPTER 6. DEEP FEEDFORWARD NETWORKS
concerning how to perform differentiation. More generally, the field of automatic
differentiation is concerned with how to compute derivatives algorithmically.
The back-propagation algorithm described here is only one approach to automatic
differentiation. It is a special case of a broader class of techniques called reverse
mode accumulation. Other approaches evaluate the subexpressions of the chain
rule in different orders. In general, determining the order of evaluation that
results in the lowest computational cost is a difficult problem. Finding the optimal
sequence of operations to compute the gradient is NP-complete (Naumann, 2008),
in the sense that it may require simplifying algebraic expressions into their least
expensive form.
For example, suppose we have variables p_1, p_2, . . . , p_n representing probabilities
and variables z_1, z_2, . . . , z_n representing unnormalized log probabilities. Suppose
we define
\[ q_i = \frac{\exp(z_i)}{\sum_i \exp(z_i)}, \qquad (6.57) \]
where we build the softmax function out of exponentiation, summation and division
operations, and construct a cross-entropy loss J = −∑_i p_i log q_i. A human
mathematician can observe that the derivative of J with respect to z_i takes a very
simple form: q_i − p_i. The back-propagation algorithm is not capable of simplifying
the gradient this way, and will instead explicitly propagate gradients through all of
the logarithm and exponentiation operations in the original graph. Some software
libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012) are able to
perform some kinds of algebraic substitution to improve over the graph proposed
by the pure back-propagation algorithm.
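The simplification itself is easy to verify numerically; a small NumPy check, with arbitrary example values of p and z chosen only for illustration, might look like this:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def J(z, p):
    return -np.sum(p * np.log(softmax(z)))

p = np.array([0.2, 0.5, 0.3])                      # probabilities (sum to 1)
z = np.array([0.1, -1.0, 2.0])                     # unnormalized log probabilities
q = softmax(z)

# Central finite differences of J with respect to z, compared with q - p.
eps = 1e-6
numeric = np.array([(J(z + eps * e, p) - J(z - eps * e, p)) / (2 * eps)
                    for e in np.eye(3)])
print(np.allclose(numeric, q - p, atol=1e-6))      # True
```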
When the forward graph G has a single output node and each partial derivative
∂u^(i)/∂u^(j) can be computed with a constant amount of computation, back-propagation
guarantees that the number of computations for the gradient computation is of
the same order as the number of computations for the forward computation: this
can be seen in algorithm 6.2, because each local partial derivative ∂u^(i)/∂u^(j) needs to
be computed only once along with an associated multiplication and addition for
the recursive chain-rule formulation (equation 6.49). The overall computation is
therefore O(# edges). However, it can potentially be reduced by simplifying the
computational graph constructed by back-propagation, and this is an NP-complete
task. Implementations such as Theano and TensorFlow use heuristics based on
matching known simplification patterns in order to iteratively attempt to simplify
the graph. We defined back-propagation only for the computation of a gradient of a
scalar output but back-propagation can be extended to compute a Jacobian (either
of k different scalar nodes in the graph, or of a tensor-valued node containing k
values). A naive implementation may then need k times more computation: for
each scalar internal node in the original forward graph, the naive implementation
computes k gradients instead of a single gradient. When the number of outputs of
the graph is larger than the number of inputs, it is sometimes preferable to use
another form of automatic differentiation called forward mode accumulation.
Forward mode computation has been proposed for obtaining real-time computation
of gradients in recurrent networks, for example (Williams and Zipser, 1989). This
also avoids the need to store the values and gradients for the whole graph, trading
off computational efficiency for memory. The relationship between forward mode
and backward mode is analogous to the relationship between left-multiplying versus
right-multiplying a sequence of matrices, such as
ABCD, (6.58)
where the matrices can be thought of as Jacobian matrices. For example, if D
is a column vector while A has many rows, this corresponds to a graph with a
single output and many inputs, and starting the multiplications from the end
and going backwards only requires matrix-vector products. This corresponds to
the backward mode. Instead, starting to multiply from the left would involve a
series of matrix-matrix products, which makes the whole computation much more
expensive. However, if A has fewer rows than D has columns, it is cheaper to run
the multiplications left-to-right, corresponding to the forward mode.
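The cost difference is easy to see concretely. In the sketch below (sizes chosen arbitrarily), both orderings give the same result, but the right-to-left ordering uses only matrix-vector products, whereas the left-to-right ordering forms full matrix-matrix products:

```python
import numpy as np

n = 500
rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))
d = rng.normal(size=(n, 1))

# Reverse-mode-like order: three matrix-vector products, O(n^2) each.
right_to_left = A @ (B @ (C @ d))

# Forward-mode-like order: matrix-matrix products, O(n^3) each.
left_to_right = ((A @ B) @ C) @ d

print(np.allclose(right_to_left, left_to_right))   # same result, different cost
```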
In many communities outside of machine learning, it is more common to im-
plement differentiation software that acts directly on traditional programming
language code, such as Python or C code, and automatically generates programs
that differentiate functions written in these languages. In the deep learning com-
munity, computational graphs are usually represented by explicit data structures
created by specialized libraries. The specialized approach has the drawback of
requiring the library developer to define the bprop methods for every operation
and limiting the user of the library to only those operations that have been defined.
However, the specialized approach also has the benefit of allowing customized
back-propagation rules to be developed for each operation, allowing the developer
to improve speed or stability in non-obvious ways that an automatic procedure
would presumably be unable to replicate.
Back-propagation is therefore not the only way or the optimal way of computing
the gradient, but it is a very practical method that continues to serve the deep
learning community very well. In the future, differentiation technology for deep
networks may improve as deep learning practitioners become more aware of advances
in the broader field of automatic differentiation.
6.5.10 Higher-Order Derivatives
Some software frameworks support the use of higher-order derivatives. Among the
deep learning software frameworks, this includes at least Theano and TensorFlow.
These libraries use the same kind of data structure to describe the expressions for
derivatives as they use to describe the original function being differentiated. This
means that the symbolic differentiation machinery can be applied to derivatives.
In the context of deep learning, it is rare to compute a single second derivative
of a scalar function. Instead, we are usually interested in properties of the Hessian
matrix. If we have a function f : R^n → R, then the Hessian matrix is of size n × n.
In typical deep learning applications, n will be the number of parameters in the
model, which could easily number in the billions. The entire Hessian matrix is
thus infeasible to even represent.
Instead of explicitly computing the Hessian, the typical deep learning approach
is to use Krylov methods. Krylov methods are a set of iterative techniques for
performing various operations like approximately inverting a matrix or finding
approximations to its eigenvectors or eigenvalues, without using any operation
other than matrix-vector products.
In order to use Krylov methods on the Hessian, we only need to be able to
compute the product between the Hessian matrix H and an arbitrary vector v. A
straightforward technique (Christianson, 1992) for doing so is to compute
\[ \boldsymbol{H}\boldsymbol{v} = \nabla_{\boldsymbol{x}}\big[ (\nabla_{\boldsymbol{x}} f(\boldsymbol{x}))^{\top} \boldsymbol{v} \big]. \qquad (6.59) \]
Both of the gradient computations in this expression may be computed automati-
cally by the appropriate software library. Note that the outer gradient expression
takes the gradient of a function of the inner gradient expression.
If v is itself a vector produced by a computational graph, it is important to
specify that the automatic differentiation software should not differentiate through
the graph that produced v.
While computing the Hessian is usually not advisable, it is possible to do with
Hessian-vector products. One simply computes He^(i) for all i = 1, . . . , n, where
e^(i) is the one-hot vector with e^(i)_i = 1 and all other entries equal to 0.
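As a concrete illustration, the following sketch computes a Hessian-vector product by applying equation 6.59 twice through an automatic differentiation library. Using JAX here is only one possible choice, and the function f and the test values are arbitrary; stop_gradient keeps v out of the differentiated graph, as cautioned above.

```python
import jax
import jax.numpy as jnp

def f(x):
    # An arbitrary scalar-valued example function.
    return jnp.sum(jnp.tanh(x) ** 3)

def hvp(f, x, v):
    # Hv = grad_x [ (grad_x f(x))^T v ], treating v as a constant.
    v = jax.lax.stop_gradient(v)
    return jax.grad(lambda x: jnp.vdot(jax.grad(f)(x), v))(x)

x = jnp.array([0.5, -1.0, 2.0])
v = jnp.array([1.0, 0.0, -1.0])

# Compare against the explicitly formed Hessian (feasible only for tiny n).
H = jax.hessian(f)(x)
print(jnp.allclose(hvp(f, x, v), H @ v))           # True
```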
6.6 Historical Notes
Feedforward networks can be seen as efficient nonlinear function approximators
based on using gradient descent to minimize the error in a function approximation.
224
CHAPTER 6. DEEP FEEDFORWARD NETWORKS
From this point of view, the modern feedforward network is the culmination of
centuries of progress on the general function approximation task.
The chain rule that underlies the back-propagation algorithm was invented
in the 17th century (Leibniz, 1676; L’Hôpital, 1696). Calculus and algebra have
long been used to solve optimization problems in closed form, but gradient descent
was not introduced as a technique for iteratively approximating the solution to
optimization problems until the 19th century (Cauchy, 1847).
Beginning in the 1940s, these function approximation techniques were used to
motivate machine learning models such as the perceptron. However, the earliest
models were based on linear models. Critics including Marvin Minsky pointed out
several of the flaws of the linear model family, such as its inability to learn the
XOR function, which led to a backlash against the entire neural network approach.
Learning nonlinear functions required the development of a multilayer per-
ceptron and a means of computing the gradient through such a model. Efficient
applications of the chain rule based on dynamic programming began to appear
in the 1960s and 1970s, mostly for control applications (Kelley, 1960; Bryson and
Denham, 1961; Dreyfus, 1962; Bryson and Ho, 1969; Dreyfus, 1973) but also for
sensitivity analysis (Linnainmaa, 1976). Werbos (1981) proposed applying these
techniques to training artificial neural networks. The idea was finally developed
in practice after being independently rediscovered in different ways (LeCun, 1985;
Parker, 1985; Rumelhart et al., 1986a). The book Parallel Distributed Processing
presented the results of some of the first successful experiments with
back-propagation in a chapter (Rumelhart et al., 1986b) that contributed greatly
to the popularization of back-propagation and initiated a very active period of
research in multi-layer neural networks. However, the ideas put forward by the
authors of that book and in particular by Rumelhart and Hinton go much beyond
back-propagation. They include crucial ideas about the possible computational
implementation of several central aspects of cognition and learning, which came
under the name of “connectionism” because of the importance this school of thought
places on the connections between neurons as the locus of learning and memory.
In particular, these ideas include the notion of distributed representation (Hinton
et al., 1986).
Following the success of back-propagation, neural network research gained pop-
ularity and reached a peak in the early 1990s. Afterwards, other machine learning
techniques became more popular until the modern deep learning renaissance that
began in 2006.
The core ideas behind modern feedforward networks have not changed sub-
stantially since the 1980s. The same back-propagation algorithm and the same
approaches to gradient descent are still in use. Most of the improvement in neural
network performance from 1986 to 2015 can be attributed to two factors. First,
larger datasets have reduced the degree to which statistical generalization is a
challenge for neural networks. Second, neural networks have become much larger,
due to more powerful computers, and better software infrastructure. However, a
small number of algorithmic changes have improved the performance of neural
networks noticeably.
One of these algorithmic changes was the replacement of mean squared error
with the cross-entropy family of loss functions. Mean squared error was popular in
the 1980s and 1990s, but was gradually replaced by cross-entropy losses and the
principle of maximum likelihood as ideas spread between the statistics community
and the machine learning community. The use of cross-entropy losses greatly
improved the performance of models with sigmoid and softmax outputs, which
had previously suffered from saturation and slow learning when using the mean
squared error loss.
The other major algorithmic change that has greatly improved the performance
of feedforward networks was the replacement of sigmoid hidden units with piecewise
linear hidden units, such as rectified linear units. Rectification using the max{0, z}
function was introduced in early neural network models and dates back at least
as far as the Cognitron and Neocognitron (Fukushima, 1975, 1980). These early
models did not use rectified linear units, but instead applied rectification to
nonlinear functions. Despite the early popularity of rectification, rectification was
largely replaced by sigmoids in the 1980s, perhaps because sigmoids perform better
when neural networks are very small. As of the early 2000s, rectified linear units
were avoided due to a somewhat superstitious belief that activation functions with
non-differentiable points must be avoided. This began to change in about 2009.
Jarrett et al. (2009) observed that “using a rectifying nonlinearity is the single most
important factor in improving the performance of a recognition system” among
several different factors of neural network architecture design.
For small datasets, Jarrett et al. (2009) observed that using rectifying non-
linearities is even more important than learning the weights of the hidden layers.
Random weights are sufficient to propagate useful information through a rectified
linear network, allowing the classifier layer at the top to learn how to map different
feature vectors to class identities.
When more data is available, learning begins to extract enough useful knowledge
to exceed the performance of randomly chosen parameters. Glorot et al. (2011a)
showed that learning is far easier in deep rectified linear networks than in deep
networks that have curvature or two-sided saturation in their activation functions.
Rectified linear units are also of historical interest because they show that
neuroscience has continued to have an influence on the development of deep
learning algorithms. Glorot et al. (2011a) motivate rectified linear units from
biological considerations. The half-rectifying nonlinearity was intended to capture
these properties of biological neurons: 1) For some inputs, biological neurons are
completely inactive. 2) For some inputs, a biological neuron’s output is proportional
to its input. 3) Most of the time, biological neurons operate in the regime where
they are inactive (i.e., they should have sparse activations).
When the modern resurgence of deep learning began in 2006, feedforward
networks continued to have a bad reputation. From about 2006-2012, it was widely
believed that feedforward networks would not perform well unless they were assisted
by other models, such as probabilistic models. Today, it is now known that with the
right resources and engineering practices, feedforward networks perform very well.
Today, gradient-based learning in feedforward networks is used as a tool to develop
probabilistic models, such as the variational autoencoder and generative adversarial
networks, described in chapter 20. Rather than being viewed as an unreliable
technology that must be supported by other techniques, gradient-based learning in
feedforward networks has been viewed since 2012 as a powerful technology that
may be applied to many other machine learning tasks. In 2006, the community
used unsupervised learning to support supervised learning, and now, ironically, it
is more common to use supervised learning to support unsupervised learning.
Feedforward networks continue to have unfulfilled potential. In the future, we
expect they will be applied to many more tasks, and that advances in optimization
algorithms and model design will improve their performance even further. This
chapter has primarily described the neural network family of models. In the
subsequent chapters, we turn to how to use these models—how to regularize and
train them.
Chapter 7
Regularization for Deep Learning
A central problem in machine learning is how to make an algorithm that will
perform well not just on the training data, but also on new inputs. Many strategies
used in machine learning are explicitly designed to reduce the test error, possibly
at the expense of increased training error. These strategies are known collectively
as regularization. As we will see there are a great many forms of regularization
available to the deep learning practitioner. In fact, developing more effective
regularization strategies has been one of the major research efforts in the field.
Chapter 5 introduced the basic concepts of generalization, underfitting, overfitting,
bias, variance and regularization. If you are not already familiar with these
notions, please refer to that chapter before continuing with this one.
In this chapter, we describe regularization in more detail, focusing on regular-
ization strategies for deep models or models that may be used as building blocks
to form deep models.
Some sections of this chapter deal with standard concepts in machine learning.
If you are already familiar with these concepts, feel free to skip the relevant
sections. However, most of this chapter is concerned with the extension of these
basic concepts to the particular case of neural networks.
In section 5.2.2, we defined regularization as “any modification we make to
a learning algorithm that is intended to reduce its generalization error but not
its training error.” There are many regularization strategies. Some put extra
constraints on a machine learning model, such as adding restrictions on the
parameter values. Some add extra terms in the objective function that can be
thought of as corresponding to a soft constraint on the parameter values. If chosen
carefully, these extra constraints and penalties can lead to improved performance
on the test set. Sometimes these constraints and penalties are designed to encode
specific kinds of prior knowledge. Other times, these constraints and penalties
are designed to express a generic preference for a simpler model class in order to
promote generalization. Sometimes penalties and constraints are necessary to make
an underdetermined problem determined. Other forms of regularization, known as
ensemble methods, combine multiple hypotheses that explain the training data.
In the context of deep learning, most regularization strategies are based on
regularizing estimators. Regularization of an estimator works by trading increased
bias for reduced variance. An effective regularizer is one that makes a profitable
trade, reducing variance significantly while not overly increasing the bias. When we
discussed generalization and overfitting in chapter 5, we focused on three situations,
where the model family being trained either (1) excluded the true data generating
process—corresponding to underfitting and inducing bias, or (2) matched the true
data generating process, or (3) included the generating process but also many
other possible generating processes—the overfitting regime where variance rather
than bias dominates the estimation error. The goal of regularization is to take a
model from the third regime into the second regime.
In practice, an overly complex model family does not necessarily include the
target function or the true data generating process, or even a close approximation
of either. We almost never have access to the true data generating process so
we can never know for sure if the model family being estimated includes the
generating process or not. However, most applications of deep learning algorithms
are to domains where the true data generating process is almost certainly outside
the model family. Deep learning algorithms are typically applied to extremely
complicated domains such as images, audio sequences and text, for which the true
generation process essentially involves simulating the entire universe. To some
extent, we are always trying to fit a square peg (the data generating process) into
a round hole (our model family).
What this means is that controlling the complexity of the model is not a
simple matter of finding the model of the right size, with the right number of
parameters. Instead, we might find—and indeed in practical deep learning scenarios,
we almost always do find—that the best fitting model (in the sense of minimizing
generalization error) is a large model that has been regularized appropriately.
We now review several strategies for how to create such a large, deep, regularized
model.
7.1 Parameter Norm Penalties
Regularization has been used for decades prior to the advent of deep learning. Linear
models such as linear regression and logistic regression allow simple, straightforward,
and effective regularization strategies.
Many regularization approaches are based on limiting the capacity of models,
such as neural networks, linear regression, or logistic regression, by adding a pa-
rameter norm penalty Ω(θ) to the objective function J. We denote the regularized
objective function by J̃:
\[ \tilde{J}(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) = J(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) + \alpha\, \Omega(\boldsymbol{\theta}), \qquad (7.1) \]
where α ∈ [0, ∞) is a hyperparameter that weights the relative contribution of the
norm penalty term, Ω, relative to the standard objective function J. Setting α to 0
results in no regularization. Larger values of α correspond to more regularization.
When our training algorithm minimizes the regularized objective function J̃, it
will decrease both the original objective J on the training data and some measure
of the size of the parameters θ (or some subset of the parameters). Different
choices for the parameter norm Ω can result in different solutions being preferred.
In this section, we discuss the effects of the various norms when used as penalties
on the model parameters.
Before delving into the regularization behavior of different norms, we note that
for neural networks, we typically choose to use a parameter norm penalty Ω that
penalizes only the weights of the affine transformation at each layer and leaves
the biases unregularized. The biases typically require less data to fit accurately
than the weights. Each weight specifies how two variables interact. Fitting the
weight well requires observing both variables in a variety of conditions. Each
bias controls only a single variable. This means that we do not induce too much
variance by leaving the biases unregularized. Also, regularizing the bias parameters
can introduce a significant amount of underfitting. We therefore use the vector w
to indicate all of the weights that should be affected by a norm penalty, while the
vector θ denotes all of the parameters, including both w and the unregularized
parameters.
In the context of neural networks, it is sometimes desirable to use a separate
penalty with a different α coefficient for each layer of the network. Because it can
be expensive to search for the correct value of multiple hyperparameters, it is still
reasonable to use the same weight decay at all layers just to reduce the size of
search space.
7.1.1 L2 Parameter Regularization
We have already seen, in section 5.2.2, one of the simplest and most common kinds
of parameter norm penalty: the L2 parameter norm penalty commonly known as
weight decay. This regularization strategy drives the weights closer to the origin¹
by adding a regularization term Ω(θ) = ½‖w‖₂² to the objective function. In other
academic communities, L2 regularization is also known as ridge regression or
Tikhonov regularization.
We can gain some insight into the behavior of weight decay regularization
by studying the gradient of the regularized objective function. To simplify the
presentation, we assume no bias parameter, so θ is just w. Such a model has the
following total objective function:
    J̃(w; X, y) = (α/2) wᵀw + J(w; X, y),     (7.2)
with the corresponding parameter gradient
    ∇w J̃(w; X, y) = αw + ∇w J(w; X, y).     (7.3)
To take a single gradient step to update the weights, we perform this update:
    w ← w − ε(αw + ∇w J(w; X, y)).     (7.4)
Written another way, the update is:
    w ← (1 − εα)w − ε∇w J(w; X, y).     (7.5)
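As a minimal sketch (not part of the original text), the update in equation 7.5 can be written in a few lines of NumPy; the gradient function grad_J, the learning rate eps, and the coefficient alpha are illustrative placeholder names:

    import numpy as np

    def weight_decay_step(w, grad_J, eps=0.1, alpha=0.01):
        # One SGD step on the L2-regularized objective (equation 7.5):
        # shrink the weights by (1 - eps * alpha), then apply the usual gradient step.
        return (1.0 - eps * alpha) * w - eps * grad_J(w)

    # Toy usage with a quadratic objective J(w) = 0.5 * ||w - w_star||^2.
    w_star = np.array([1.0, -2.0])
    grad_J = lambda w: w - w_star
    w = np.zeros(2)
    for _ in range(100):
        w = weight_decay_step(w, grad_J)
    print(w)  # pulled toward w_star, but shrunk slightly toward the origin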
We can see that the addition of the weight decay term has modified the learning
rule to multiplicatively shrink the weight vector by a constant factor on each step,
just before performing the usual gradient update. This describes what happens in
a single step. But what happens over the entire course of training?
We will further simplify the analysis by making a quadratic approximation
to the objective function in the neighborhood of the value of the weights that
obtains minimal unregularized training cost, w∗
= arg minw J(w). If the objective
function is truly quadratic, as in the case of fitting a linear regression model with
¹More generally, we could regularize the parameters to be near any specific point in space
and, surprisingly, still get a regularization effect, but better results will be obtained for a value
closer to the true one, with zero being a default value that makes sense when we do not know if
the correct value should be positive or negative. Since it is far more common to regularize the
model parameters towards zero, we will focus on this special case in our exposition.
mean squared error, then the approximation is perfect. The approximation Ĵ is
given by

    Ĵ(θ) = J(w∗) + ½(w − w∗)ᵀH(w − w∗),     (7.6)
where H is the Hessian matrix of J with respect to w evaluated at w∗. There is
no first-order term in this quadratic approximation, because w∗ is defined to be a
minimum, where the gradient vanishes. Likewise, because w∗ is the location of a
minimum of J, we can conclude that H is positive semidefinite.
The minimum of Ĵ occurs where its gradient

    ∇w Ĵ(w) = H(w − w∗)     (7.7)

is equal to 0.
To study the effect of weight decay, we modify equation 7.7 by adding the
weight decay gradient. We can now solve for the minimum of the regularized
version of Ĵ. We use the variable w̃ to represent the location of the minimum.

    αw̃ + H(w̃ − w∗) = 0     (7.8)
    (H + αI)w̃ = Hw∗     (7.9)
    w̃ = (H + αI)⁻¹Hw∗.     (7.10)
As α approaches 0, the regularized solution w̃ approaches w∗
. But what
happens as α grows? Because H is real and symmetric, we can decompose it
into a diagonal matrix Λ and an orthonormal basis of eigenvectors, Q, such that
H = QΛQᵀ. Applying the decomposition to equation 7.10, we obtain:

    w̃ = (QΛQᵀ + αI)⁻¹QΛQᵀw∗     (7.11)
       = [Q(Λ + αI)Qᵀ]⁻¹QΛQᵀw∗     (7.12)
       = Q(Λ + αI)⁻¹ΛQᵀw∗.     (7.13)
We see that the effect of weight decay is to rescale w∗ along the axes defined by
the eigenvectors of H. Specifically, the component of w∗ that is aligned with the
i-th eigenvector of H is rescaled by a factor of λᵢ/(λᵢ + α). (You may wish to review
how this kind of scaling works, first explained in figure 2.3.)
Along the directions where the eigenvalues of H are relatively large, for example,
where λᵢ ≫ α, the effect of regularization is relatively small. However, components
with λᵢ ≪ α will be shrunk to have nearly zero magnitude. This effect is illustrated
in figure 7.1.
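The rescaling by λᵢ/(λᵢ + α) is easy to check numerically. The following sketch uses an arbitrary, made-up positive definite Hessian and optimum w∗ purely for illustration; it compares the closed form of equation 7.10 with the per-eigendirection shrinkage of equation 7.13:

    import numpy as np

    # Made-up positive definite Hessian and unregularized optimum (illustration only).
    H = np.array([[2.0, 0.5],
                  [0.5, 0.3]])
    w_star = np.array([1.0, 1.0])
    alpha = 0.3

    # Closed-form regularized minimum, equation 7.10.
    w_tilde = np.linalg.solve(H + alpha * np.eye(2), H @ w_star)

    # The same result expressed in the eigenbasis of H, equation 7.13.
    lam, Q = np.linalg.eigh(H)
    shrink = lam / (lam + alpha)               # factors lambda_i / (lambda_i + alpha)
    w_tilde_eig = Q @ (shrink * (Q.T @ w_star))

    print(np.allclose(w_tilde, w_tilde_eig))   # True
    print(shrink)  # the large-eigenvalue direction is barely shrunk, the small one strongly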
Figure 7.1: An illustration of the effect of L2 (or weight decay) regularization on the value
of the optimal w. The solid ellipses represent contours of equal value of the unregularized
objective. The dotted circles represent contours of equal value of the L2 regularizer. At
the point w̃, these competing objectives reach an equilibrium. In the first dimension, the
eigenvalue of the Hessian of J is small. The objective function does not increase much
when moving horizontally away from w∗. Because the objective function does not express
a strong preference along this direction, the regularizer has a strong effect on this axis.
The regularizer pulls w1 close to zero. In the second dimension, the objective function
is very sensitive to movements away from w∗. The corresponding eigenvalue is large,
indicating high curvature. As a result, weight decay affects the position of w2 relatively
little.
Only directions along which the parameters contribute significantly to reducing
the objective function are preserved relatively intact. In directions that do not
contribute to reducing the objective function, a small eigenvalue of the Hessian
tells us that movement in this direction will not significantly increase the objective.
Components of the weight vector corresponding to such unimportant directions
are decayed away through the use of regularization throughout training.
So far we have discussed weight decay in terms of its effect on the optimization
of an abstract, general, quadratic cost function. How do these effects relate to
machine learning in particular? We can find out by studying linear regression, a
model for which the true cost function is quadratic and therefore amenable to the
same kind of analysis we have used so far. Applying the analysis again, we will
be able to obtain a special case of the same results, but with the solution now
phrased in terms of the training data. For linear regression, the cost function is
the sum of squared errors:
    (Xw − y)ᵀ(Xw − y).     (7.14)
When we add L2 regularization, the objective function changes to

    (Xw − y)ᵀ(Xw − y) + (1/2)αwᵀw.     (7.15)
This changes the normal equations for the solution from
    w = (XᵀX)⁻¹Xᵀy     (7.16)

to

    w = (XᵀX + αI)⁻¹Xᵀy.     (7.17)
The matrix XX in equation is proportional to the covariance matrix
7.16 1
m XX.
Using L2 regularization replaces this matrix with

XX I
+ α
−1
in equation .
7.17
The new matrix is the same as the original one, but with the addition of α to the
diagonal. The diagonal entries of this matrix correspond to the variance of each
input feature. We can see that L2 regularization causes the learning algorithm
to “perceive” the input X as having higher variance, which makes it shrink the
weights on features whose covariance with the output target is low compared to
this added variance.
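Equations 7.16 and 7.17 translate directly into a few lines of NumPy. The following is a minimal sketch on synthetic data; the data-generating choices are invented for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 50, 5
    X = rng.normal(size=(m, n))
    y = X @ rng.normal(size=n) + 0.1 * rng.normal(size=m)
    alpha = 1.0

    # Ordinary least squares, equation 7.16.
    w_ols = np.linalg.solve(X.T @ X, X.T @ y)

    # L2-regularized (ridge) solution, equation 7.17: alpha is added to the diagonal.
    w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

    print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))  # the ridge weights are smaller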
7.1.2 L1 Regularization
While L2
weight decay is the most common form of weight decay, there are other
ways to penalize the size of the model parameters. Another option is to use L1
regularization.
Formally, L1 regularization on the model parameters w is defined as:

    Ω(θ) = ‖w‖₁ = Σᵢ |wᵢ|,     (7.18)
that is, as the sum of absolute values of the individual parameters.2
We will
now discuss the effect of L1 regularization on the simple linear regression model,
with no bias parameter, that we studied in our analysis of L2 regularization. In
particular, we are interested in delineating the differences between L1 and L2 forms
²As with L2 regularization, we could regularize the parameters towards a value that is not
zero, but instead towards some parameter value w⁽ᵒ⁾. In that case the L1 regularization would
introduce the term Ω(θ) = ‖w − w⁽ᵒ⁾‖₁ = Σᵢ |wᵢ − wᵢ⁽ᵒ⁾|.
of regularization. As with L2 weight decay, L1 weight decay controls the strength
of the regularization by scaling the penalty Ω using a positive hyperparameter α.
Thus, the regularized objective function J̃(w; X, y) is given by

    J̃(w; X, y) = α‖w‖₁ + J(w; X, y),     (7.19)
with the corresponding gradient (actually, sub-gradient):

    ∇w J̃(w; X, y) = α sign(w) + ∇w J(w; X, y),     (7.20)

where sign(w) is simply the sign of w applied element-wise.
By inspecting equation 7.20, we can see immediately that the effect of L1
regularization is quite different from that of L2 regularization. Specifically, we can
see that the regularization contribution to the gradient no longer scales linearly
with each wᵢ; instead it is a constant factor with a sign equal to sign(wᵢ). One
consequence of this form of the gradient is that we will not necessarily see clean
algebraic solutions to quadratic approximations of J(X, y; w) as we did for L2
regularization.
Our simple linear model has a quadratic cost function that we can represent
via its Taylor series. Alternately, we could imagine that this is a truncated Taylor
series approximating the cost function of a more sophisticated model. The gradient
in this setting is given by
    ∇w Ĵ(w) = H(w − w∗),     (7.21)
where, again, H is the Hessian matrix of J with respect to w evaluated at w∗.
Because the L1 penalty does not admit clean algebraic expressions in the case
of a fully general Hessian, we will also make the further simplifying assumption
that the Hessian is diagonal, H = diag([H₁,₁, . . . , Hₙ,ₙ]), where each Hᵢ,ᵢ > 0.
This assumption holds if the data for the linear regression problem has been
preprocessed to remove all correlation between the input features, which may be
accomplished using PCA.
Our quadratic approximation of the L1 regularized objective function decom-
poses into a sum over the parameters:
    Ĵ(w; X, y) = J(w∗; X, y) + Σᵢ [ ½ Hᵢ,ᵢ(wᵢ − w∗ᵢ)² + α|wᵢ| ].     (7.22)
The problem of minimizing this approximate cost function has an analytical solution
(for each dimension i), with the following form:

    wᵢ = sign(w∗ᵢ) max{ |w∗ᵢ| − α/Hᵢ,ᵢ , 0 }.     (7.23)
Consider the situation where w∗ᵢ > 0 for all i. There are two possible outcomes:

1. The case where w∗ᵢ ≤ α/Hᵢ,ᵢ. Here the optimal value of wᵢ under the regularized
   objective is simply wᵢ = 0. This occurs because the contribution of J(w; X, y)
   to the regularized objective J̃(w; X, y) is overwhelmed, in direction i, by
   the L1 regularization, which pushes the value of wᵢ to zero.
2. The case where w∗ᵢ > α/Hᵢ,ᵢ. In this case, the regularization does not move the
   optimal value of wᵢ to zero but instead just shifts it toward zero by a
   distance equal to α/Hᵢ,ᵢ.
A similar process happens when w∗ᵢ < 0, but with the L1 penalty making wᵢ less
negative by α/Hᵢ,ᵢ, or 0.
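Equation 7.23 is a coordinate-wise soft-thresholding operation. A minimal sketch, assuming a diagonal Hessian supplied as a vector H_diag of positive entries (the names are illustrative):

    import numpy as np

    def l1_soft_threshold(w_star, H_diag, alpha):
        # Per-coordinate minimizer of the quadratic-plus-L1 objective (equation 7.23).
        # Coordinates with |w_star_i| <= alpha / H_ii are set exactly to zero;
        # the remaining coordinates are shifted toward zero by alpha / H_ii.
        w_star = np.asarray(w_star, dtype=float)
        H_diag = np.asarray(H_diag, dtype=float)
        return np.sign(w_star) * np.maximum(np.abs(w_star) - alpha / H_diag, 0.0)

    print(l1_soft_threshold([0.3, -2.0, 0.05], [1.0, 1.0, 1.0], alpha=0.5))
    # [ 0.  -1.5  0. ]  -- small coordinates are zeroed, large ones shrink by alpha / H_ii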
In comparison to L2 regularization, L1 regularization results in a solution that
is more sparse. Sparsity in this context refers to the fact that some parameters
have an optimal value of zero. The sparsity of L1 regularization is a qualitatively
different behavior than arises with L2 regularization. Equation 7.13 gave the
solution w̃ for L2 regularization. If we revisit that equation using the assumption
of a diagonal and positive definite Hessian H that we introduced for our analysis of
L1 regularization, we find that w̃ᵢ = [Hᵢ,ᵢ/(Hᵢ,ᵢ + α)] w∗ᵢ. If w∗ᵢ was nonzero, then w̃ᵢ remains
nonzero. This demonstrates that L2 regularization does not cause the parameters
to become sparse, while L1 regularization may do so for large enough α.
The sparsity property induced by L1 regularization has been used extensively
as a feature selection mechanism. Feature selection simplifies a machine learning
problem by choosing which subset of the available features should be used. In
particular, the well known LASSO (Tibshirani, 1995) (least absolute shrinkage and
selection operator) model integrates an L1 penalty with a linear model and a least
squares cost function. The L1 penalty causes a subset of the weights to become
zero, suggesting that the corresponding features may safely be discarded.
In section 5.6.1, we saw that many regularization strategies can be interpreted
as MAP Bayesian inference, and that in particular, L2 regularization is equivalent
to MAP Bayesian inference with a Gaussian prior on the weights. For L1 regu-
larization, the penalty αΩ(w) = α Σᵢ |wᵢ| used to regularize a cost function is
equivalent to the log-prior term that is maximized by MAP Bayesian inference
when the prior is an isotropic Laplace distribution (equation 3.26) over w ∈ Rⁿ:

    log p(w) = Σᵢ log Laplace(wᵢ; 0, 1/α) = −α‖w‖₁ + n log α − n log 2.     (7.24)
From the point of view of learning via maximization with respect to w, we can
ignore the log α − log 2 terms because they do not depend on w.
7.2 Norm Penalties as Constrained Optimization
Consider the cost function regularized by a parameter norm penalty:
    J̃(θ; X, y) = J(θ; X, y) + αΩ(θ).     (7.25)
Recall from section 4.4 that we can minimize a function subject to constraints
by constructing a generalized Lagrange function, consisting of the original objective
function plus a set of penalties. Each penalty is a product between a coefficient,
called a Karush–Kuhn–Tucker (KKT) multiplier, and a function representing
whether the constraint is satisfied. If we wanted to constrain Ω(θ) to be less than
some constant k, we could construct a generalized Lagrange function

    L(θ, α; X, y) = J(θ; X, y) + α(Ω(θ) − k).     (7.26)
The solution to the constrained problem is given by
    θ∗ = arg minθ maxα,α≥0 L(θ, α).     (7.27)
As described in section 4.4, solving this problem requires modifying both θ
and α. Section 4.5 provides a worked example of linear regression with an L2
constraint. Many different procedures are possible—some may use gradient descent,
while others may use analytical solutions for where the gradient is zero—but in all
procedures α must increase whenever Ω(θ) > k and decrease whenever Ω(θ) < k.
All positive α encourage Ω(θ) to shrink. The optimal value α∗ will encourage Ω(θ)
to shrink, but not so strongly as to make Ω(θ) become less than k.
To gain some insight into the effect of the constraint, we can fix α∗ and view
the problem as just a function of θ:

    θ∗ = arg minθ L(θ, α∗) = arg minθ J(θ; X, y) + α∗Ω(θ).     (7.28)
This is exactly the same as the regularized training problem of minimizing J̃.
We can thus think of a parameter norm penalty as imposing a constraint on the
weights. If Ω is the L2 norm, then the weights are constrained to lie in an L2
ball. If Ω is the L1 norm, then the weights are constrained to lie in a region of
limited L1
norm. Usually we do not know the size of the constraint region that we
impose by using weight decay with coefficient α∗
because the value of α∗
does not
directly tell us the value of k. In principle, one can solve for k, but the relationship
between k and α∗
depends on the form of J. While we do not know the exact size
of the constraint region, we can control it roughly by increasing or decreasing α
in order to grow or shrink the constraint region. Larger α will result in a smaller
constraint region. Smaller α will result in a larger constraint region.
Sometimes we may wish to use explicit constraints rather than penalties. As
described in section 4.4, we can modify algorithms such as stochastic gradient
descent to take a step downhill on J(θ) and then project θ back to the nearest
point that satisfies Ω(θ) < k. This can be useful if we have an idea of what value
of k is appropriate and do not want to spend time searching for the value of α that
corresponds to this k.
k
Another reason to use explicit constraints and reprojection rather than enforcing
constraints with penalties is that penalties can cause non-convex optimization
procedures to get stuck in local minima corresponding to small θ. When training
neural networks, this usually manifests as neural networks that train with several
“dead units.” These are units that do not contribute much to the behavior of the
function learned by the network because the weights going into or out of them are
all very small. When training with a penalty on the norm of the weights, these
configurations can be locally optimal, even if it is possible to significantly reduce
J by making the weights larger. Explicit constraints implemented by re-projection
can work much better in these cases because they do not encourage the weights
to approach the origin. Explicit constraints implemented by re-projection only
have an effect when the weights become large and attempt to leave the constraint
region.
Finally, explicit constraints with reprojection can be useful because they impose
some stability on the optimization procedure. When using high learning rates, it
is possible to encounter a positive feedback loop in which large weights induce
large gradients which then induce a large update to the weights. If these updates
consistently increase the size of the weights, then θ rapidly moves away from
the origin until numerical overflow occurs. Explicit constraints with reprojection
prevent this feedback loop from continuing to increase the magnitude of the weights
without bound. Hinton et al. (2012c) recommend using constraints combined with
a high learning rate to allow rapid exploration of parameter space while maintaining
some stability.
In particular, Hinton et al. (2012c) recommend a strategy introduced by Srebro
and Shraibman (2005): constraining the norm of each column of the weight matrix
of a neural net layer, rather than constraining the Frobenius norm of the entire
weight matrix. Constraining the norm of each column separately prevents any one
hidden unit from having very large weights. If we converted this constraint into a
penalty in a Lagrange function, it would be similar to L2
weight decay but with a
separate KKT multiplier for the weights of each hidden unit. Each of these KKT
multipliers would be dynamically updated separately to make each hidden unit
obey the constraint. In practice, column norm limitation is always implemented as
an explicit constraint with reprojection.
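As a minimal sketch of this kind of reprojection (the weight matrix, the limit k, and the surrounding training loop are illustrative assumptions, not anything specified here), after each gradient step we rescale any column whose L2 norm exceeds k:

    import numpy as np

    def project_columns_to_norm_ball(W, k):
        # Reproject each column of W onto the L2 ball of radius k (explicit constraint).
        norms = np.linalg.norm(W, axis=0, keepdims=True)
        # Columns already inside the ball are left untouched; others are rescaled to norm k.
        scale = np.minimum(1.0, k / np.maximum(norms, 1e-12))
        return W * scale

    # Sketch of use inside a training loop:
    #   W -= eps * grad_W                              # ordinary gradient step
    #   W = project_columns_to_norm_ball(W, k=3.0)     # then reproject
    W = np.array([[3.0, 0.1],
                  [4.0, 0.2]])
    print(np.linalg.norm(project_columns_to_norm_ball(W, k=3.0), axis=0))  # [3.0, ~0.224]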
7.3 Regularization and Under-Constrained Problems
In some cases, regularization is necessary for machine learning problems to be prop-
erly defined. Many linear models in machine learning, including linear regression
and PCA, depend on inverting the matrix XᵀX. This is not possible whenever
XᵀX is singular. This matrix can be singular whenever the data generating distri-
bution truly has no variance in some direction, or when no variance is observed in
some direction because there are fewer examples (rows of X) than input features
(columns of X). In this case, many forms of regularization correspond to inverting
XᵀX + αI instead. This regularized matrix is guaranteed to be invertible.
These linear problems have closed form solutions when the relevant matrix
is invertible. It is also possible for a problem with no closed form solution to be
underdetermined. An example is logistic regression applied to a problem where
the classes are linearly separable. If a weight vector w is able to achieve perfect
classification, then 2w will also achieve perfect classification and higher likelihood.
An iterative optimization procedure like stochastic gradient descent will continually
increase the magnitude of w and, in theory, will never halt. In practice, a numerical
implementation of gradient descent will eventually reach sufficiently large weights
to cause numerical overflow, at which point its behavior will depend on how the
programmer has decided to handle values that are not real numbers.
Most forms of regularization are able to guarantee the convergence of iterative
methods applied to underdetermined problems. For example, weight decay will
cause gradient descent to quit increasing the magnitude of the weights when the
slope of the likelihood is equal to the weight decay coefficient.
The idea of using regularization to solve underdetermined problems extends
beyond machine learning. The same idea is useful for several basic linear algebra
problems.
As we saw in section 2.9, we can solve underdetermined linear equations using
the Moore-Penrose pseudoinverse. Recall that one definition of the pseudoinverse
X⁺ of a matrix X is

    X⁺ = lim_{α↘0} (XᵀX + αI)⁻¹Xᵀ.     (7.29)
We can now recognize equation 7.29 as performing linear regression with weight
decay. Specifically, equation 7.29 is the limit of equation 7.17 as the regularization
coefficient shrinks to zero. We can thus interpret the pseudoinverse as stabilizing
underdetermined problems using regularization.
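This limiting relationship can be checked numerically: with a very small α, the weight decay solution of equation 7.17 essentially matches the pseudoinverse solution. A minimal sketch on an invented underdetermined problem (more features than examples):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(3, 6))    # fewer examples than features, so X^T X is singular
    y = rng.normal(size=3)

    w_pinv = np.linalg.pinv(X) @ y

    alpha = 1e-8
    w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(6), X.T @ y)

    print(np.allclose(w_pinv, w_ridge, atol=1e-5))  # True: tiny-alpha ridge ~ pseudoinverse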
7.4 Dataset Augmentation
The best way to make a machine learning model generalize better is to train it on
more data. Of course, in practice, the amount of data we have is limited. One way
to get around this problem is to create fake data and add it to the training set.
For some machine learning tasks, it is reasonably straightforward to create new
fake data.
This approach is easiest for classification. A classifier needs to take a compli-
cated, high dimensional input x and summarize it with a single category identity y.
This means that the main task facing a classifier is to be invariant to a wide variety
of transformations. We can generate new (x, y) pairs easily just by transforming
the x inputs in our training set.
This approach is not as readily applicable to many other tasks. For example, it
is difficult to generate new fake data for a density estimation task unless we have
already solved the density estimation problem.
Dataset augmentation has been a particularly effective technique for a specific
classification problem: object recognition. Images are high dimensional and include
an enormous variety of factors of variation, many of which can be easily simulated.
Operations like translating the training images a few pixels in each direction can
often greatly improve generalization, even if the model has already been designed to
be partially translation invariant by using the convolution and pooling techniques
described in chapter 9. Many other operations such as rotating the image or scaling
the image have also proven quite effective.
One must be careful not to apply transformations that would change the correct
class. For example, optical character recognition tasks require recognizing the
difference between ‘b’ and ‘d’ and the difference between ‘6’ and ‘9’, so horizontal
flips and 180◦
rotations are not appropriate ways of augmenting datasets for these
tasks.
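A minimal sketch of label-preserving augmentation for small images follows; the array shapes, shift range, and the flip flag are assumptions made only for illustration:

    import numpy as np

    def augment(image, rng, max_shift=2, allow_hflip=False):
        # Return a randomly translated (and optionally flipped) copy of a 2-D image.
        # allow_hflip should stay False for tasks such as character recognition,
        # where flipping changes the correct class (e.g. 'b' versus 'd').
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out = np.roll(np.roll(image, dy, axis=0), dx, axis=1)  # simple wrap-around shift
        if allow_hflip and rng.random() < 0.5:
            out = out[:, ::-1]
        return out

    rng = np.random.default_rng(0)
    img = np.arange(16.0).reshape(4, 4)
    print(augment(img, rng))  # same content, shifted by up to 2 pixels in each direction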
There are also transformations that we would like our classifiers to be invariant
to, but which are not easy to perform. For example, out-of-plane rotation can not
be implemented as a simple geometric operation on the input pixels.
Dataset augmentation is effective for speech recognition tasks as well (Jaitly
and Hinton, 2013).
Injecting noise in the input to a neural network (Sietsma and Dow, 1991)
can also be seen as a form of data augmentation. For many classification and
even some regression tasks, the task should still be possible to solve even if small
random noise is added to the input. Neural networks prove not to be very robust
to noise, however (Tang and Eliasmith, 2010). One way to improve the robustness
of neural networks is simply to train them with random noise applied to their
inputs. Input noise injection is part of some unsupervised learning algorithms such
as the denoising autoencoder (Vincent et al., 2008). Noise injection also works
when the noise is applied to the hidden units, which can be seen as doing dataset
augmentation at multiple levels of abstraction. Poole et al. (2014) recently showed
that this approach can be highly effective provided that the magnitude of the
noise is carefully tuned. Dropout, a powerful regularization strategy that will be
described in section 7.12, can be seen as a process of constructing new inputs by
multiplying by noise.
When comparing machine learning benchmark results, it is important to take
the effect of dataset augmentation into account. Often, hand-designed dataset
augmentation schemes can dramatically reduce the generalization error of a machine
learning technique. To compare the performance of one machine learning algorithm
to another, it is necessary to perform controlled experiments. When comparing
machine learning algorithm A and machine learning algorithm B, it is necessary
to make sure that both algorithms were evaluated using the same hand-designed
dataset augmentation schemes. Suppose that algorithm A performs poorly with
no dataset augmentation and algorithm B performs well when combined with
numerous synthetic transformations of the input. In such a case it is likely the
synthetic transformations caused the improved performance, rather than the use
of machine learning algorithm B. Sometimes deciding whether an experiment
has been properly controlled requires subjective judgment. For example, machine
learning algorithms that inject noise into the input are performing a form of dataset
augmentation. Usually, operations that are generally applicable (such as adding
Gaussian noise to the input) are considered part of the machine learning algorithm,
while operations that are specific to one application domain (such as randomly
cropping an image) are considered to be separate pre-processing steps.
7.5 Noise Robustness
Section 7.4 has motivated the use of noise applied to the inputs as a dataset
augmentation strategy. For some models, the addition of noise with infinitesimal
variance at the input of the model is equivalent to imposing a penalty on the
norm of the weights (Bishop, 1995a,b). In the general case, it is important to
remember that noise injection can be much more powerful than simply shrinking
the parameters, especially when the noise is added to the hidden units. Noise
applied to the hidden units is such an important topic that it merits its own separate
discussion; the dropout algorithm described in section 7.12 is the main development
of that approach.
Another way that noise has been used in the service of regularizing models
is by adding it to the weights. This technique has been used primarily in the
context of recurrent neural networks (Jim et al., 1996; Graves, 2011). This can
be interpreted as a stochastic implementation of Bayesian inference over the
weights. The Bayesian treatment of learning would consider the model weights
to be uncertain and representable via a probability distribution that reflects this
uncertainty. Adding noise to the weights is a practical, stochastic way to reflect
this uncertainty.
Noise applied to the weights can also be interpreted as equivalent (under some
assumptions) to a more traditional form of regularization, encouraging stability of
the function to be learned. Consider the regression setting, where we wish to train
a function ŷ(x) that maps a set of features x to a scalar using the least-squares
cost function between the model predictions ŷ(x) and the true values y:

    J = E_{p(x,y)}[(ŷ(x) − y)²].     (7.30)

The training set consists of m labeled examples {(x⁽¹⁾, y⁽¹⁾), . . . , (x⁽ᵐ⁾, y⁽ᵐ⁾)}.
We now assume that with each input presentation we also include a random
perturbation ε_W ∼ N(ε; 0, ηI) of the network weights. Let us imagine that we
have a standard l-layer MLP. We denote the perturbed model as ŷ_{ε_W}(x). Despite
the injection of noise, we are still interested in minimizing the squared error of the
output of the network. The objective function thus becomes:

    J̃_W = E_{p(x,y,ε_W)}[(ŷ_{ε_W}(x) − y)²]     (7.31)
        = E_{p(x,y,ε_W)}[ŷ²_{ε_W}(x) − 2y ŷ_{ε_W}(x) + y²].     (7.32)
For small η, the minimization of J with added weight noise (with covariance
ηI) is equivalent to minimization of J with an additional regularization term:
η E_{p(x,y)}[‖∇_W ŷ(x)‖²]. This form of regularization encourages the parameters to
go to regions of parameter space where small perturbations of the weights have
a relatively small influence on the output. In other words, it pushes the model
into regions where the model is relatively insensitive to small variations in the
go to regions of parameter space where small perturbations of the weights have
a relatively small influence on the output. In other words, it pushes the model
into regions where the model is relatively insensitive to small variations in the
weights, finding points that are not merely minima, but minima surrounded by
flat regions (Hochreiter and Schmidhuber 1995
, ). In the simplified case of linear
regression (where, for instance, ŷ(x) = wx+ b), this regularization term collapses
into ηEp( )
x

 
x 2

, which is not a function of parameters and therefore does not
contribute to the gradient of ˜
JW with respect to the model parameters.
7.5.1 Injecting Noise at the Output Targets
Most datasets have some amount of mistakes in the y labels. It can be harmful to
maximize log p(y | x) when y is a mistake. One way to prevent this is to explicitly
model the noise on the labels. For example, we can assume that for some small
constant ε, the training set label y is correct with probability 1 − ε, and otherwise
any of the other possible labels might be correct. This assumption is easy to
incorporate into the cost function analytically, rather than by explicitly drawing
noise samples. For example, label smoothing regularizes a model based on a
softmax with k output values by replacing the hard 0 and 1 classification targets
with targets of ε/(k − 1) and 1 − ε, respectively. The standard cross-entropy loss may
then be used with these soft targets. Maximum likelihood learning with a softmax
classifier and hard targets may actually never converge—the softmax can never
predict a probability of exactly 0 or exactly 1, so it will continue to learn larger
and larger weights, making more extreme predictions forever. It is possible to
prevent this scenario using other regularization strategies like weight decay. Label
smoothing has the advantage of preventing the pursuit of hard probabilities without
discouraging correct classification. This strategy has been used since the 1980s
and continues to be featured prominently in modern neural networks (Szegedy
et al., 2015).
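A minimal sketch of constructing label-smoothed targets for a k-class softmax, following the ε/(k − 1) and 1 − ε scheme described above (function and variable names are illustrative):

    import numpy as np

    def smooth_labels(labels, k, eps=0.1):
        # Replace hard one-hot targets with smoothed targets: the correct class gets
        # probability 1 - eps, and the remaining eps is spread uniformly, eps / (k - 1),
        # over the other k - 1 classes.
        targets = np.full((len(labels), k), eps / (k - 1))
        targets[np.arange(len(labels)), labels] = 1.0 - eps
        return targets

    print(smooth_labels([2, 0], k=4, eps=0.1))
    # [[0.0333 0.0333 0.9    0.0333]
    #  [0.9    0.0333 0.0333 0.0333]]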
7.6 Semi-Supervised Learning
In the paradigm of semi-supervised learning, both unlabeled examples from P(x)
and labeled examples from P(x, y) are used to estimate P(y | x) or predict y from
x.
In the context of deep learning, semi-supervised learning usually refers to
learning a representation h = f (x). The goal is to learn a representation so
243
CHAPTER 7. REGULARIZATION FOR DEEP LEARNING
that examples from the same class have similar representations. Unsupervised
learning can provide useful cues for how to group examples in representation
space. Examples that cluster tightly in the input space should be mapped to
similar representations. A linear classifier in the new space may achieve better
generalization in many cases (Belkin and Niyogi, 2002; Chapelle et al., 2003). A
long-standing variant of this approach is the application of principal components
analysis as a pre-processing step before applying a classifier (on the projected
data).
Instead of having separate unsupervised and supervised components in the
model, one can construct models in which a generative model of either P(x) or
P(x, y) shares parameters with a discriminative model of P(y | x). One can
then trade off the supervised criterion −log P(y | x) with the unsupervised or
generative one (such as −log P(x) or −log P(x, y)). The generative criterion then
expresses a particular form of prior belief about the solution to the supervised
learning problem (Lasserre et al., 2006), namely that the structure of P(x) is
connected to the structure of P(y | x) in a way that is captured by the shared
parametrization. By controlling how much of the generative criterion is included
in the total criterion, one can find a better trade-off than with a purely generative
or a purely discriminative training criterion (Lasserre et al., 2006; Larochelle and
Bengio, 2008).
Salakhutdinov and Hinton (2008) describe a method for learning the kernel
function of a kernel machine used for regression, in which the usage of unlabeled
examples for modeling P(x) improves P(y | x) quite significantly.
See Chapelle et al. (2006) for more information about semi-supervised learning.
7.7 Multi-Task Learning
Multi-task learning (Caruana, 1993) is a way to improve generalization by pooling
the examples (which can be seen as soft constraints imposed on the parameters)
arising out of several tasks. In the same way that additional training examples
put more pressure on the parameters of the model towards values that generalize
well, when part of a model is shared across tasks, that part of the model is more
constrained towards good values (assuming the sharing is justified), often yielding
better generalization.
Figure 7.2 illustrates a very common form of multi-task learning, in which
different supervised tasks (predicting y(i) given x) share the same input x, as well
as some intermediate-level representation h(shared) capturing a common pool of
factors. The model can generally be divided into two kinds of parts and associated
parameters:

1. Task-specific parameters (which only benefit from the examples of their task
   to achieve good generalization). These are the upper layers of the neural
   network in figure 7.2.

2. Generic parameters, shared across all the tasks (which benefit from the
   pooled data of all the tasks). These are the lower layers of the neural network
   in figure 7.2.
Figure 7.2: Multi-task learning can be cast in several ways in deep learning frameworks
and this figure illustrates the common situation where the tasks share a common input but
involve different target random variables. The lower layers of a deep network (whether it
is supervised and feedforward or includes a generative component with downward arrows)
can be shared across such tasks, while task-specific parameters (associated respectively
with the weights into and from h(1) and h(2)) can be learned on top of those yielding a
shared representation h(shared). The underlying assumption is that there exists a common
pool of factors that explain the variations in the input x, while each task is associated
with a subset of these factors. In this example, it is additionally assumed that top-level
hidden units h(1) and h(2) are specialized to each task (respectively predicting y(1) and
y(2)) while some intermediate-level representation h(shared) is shared across all tasks. In
the unsupervised learning context, it makes sense for some of the top-level factors to be
associated with none of the output tasks (h(3)): these are the factors that explain some of
the input variations but are not relevant for predicting y(1) or y(2).
Improved generalization and generalization error bounds (Baxter, 1995) can be
achieved because of the shared parameters, for which statistical strength can be
Figure 7.3: Learning curves showing how the negative log-likelihood loss changes over
time (indicated as number of training iterations over the dataset, or epochs). In this
example, we train a maxout network on MNIST. Observe that the training objective
decreases consistently over time, but the validation set average loss eventually begins to
increase again, forming an asymmetric U-shaped curve.
greatly improved (in proportion with the increased number of examples for the
shared parameters, compared to the scenario of single-task models). Of course this
will happen only if some assumptions about the statistical relationship between
the different tasks are valid, meaning that there is something shared across some
of the tasks.
From the point of view of deep learning, the underlying prior belief is the
following: among the factors that explain the variations observed in the data
associated with the different tasks, some are shared across two or more tasks.
7.8 Early Stopping
When training large models with sufficient representational capacity to overfit
the task, we often observe that training error decreases steadily over time, but
validation set error begins to rise again. See figure 7.3 for an example of this
behavior. This behavior occurs very reliably.
This means we can obtain a model with better validation set error (and thus,
hopefully better test set error) by returning to the parameter setting at the point in
time with the lowest validation set error. Every time the error on the validation set
improves, we store a copy of the model parameters. When the training algorithm
terminates, we return these parameters, rather than the latest parameters. The
246
CHAPTER 7. REGULARIZATION FOR DEEP LEARNING
algorithm terminates when no parameters have improved over the best recorded
validation error for some pre-specified number of iterations. This procedure is
specified more formally in algorithm 7.1.
Algorithm 7.1 The early stopping meta-algorithm for determining the best
amount of time to train. This meta-algorithm is a general strategy that works
well with a variety of training algorithms and ways of quantifying error on the
validation set.

Let n be the number of steps between evaluations.
Let p be the "patience," the number of times to observe worsening validation set
  error before giving up.
Let θo be the initial parameters.
θ ← θo
i ← 0
j ← 0
v ← ∞
θ∗ ← θ
i∗ ← i
while j < p do
  Update θ by running the training algorithm for n steps.
  i ← i + n
  v′ ← ValidationSetError(θ)
  if v′ < v then
    j ← 0
    θ∗ ← θ
    i∗ ← i
    v ← v′
  else
    j ← j + 1
  end if
end while
Best parameters are θ∗, best number of training steps is i∗.
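Algorithm 7.1 maps directly onto a short training loop. The following Python sketch is one possible rendering; train_n_steps, validation_error, and the toy usage at the end are placeholder callables invented for illustration, standing in for whatever training procedure is actually being used:

    import copy

    def early_stopping(theta, train_n_steps, validation_error, n=100, patience=5):
        # Algorithm 7.1: train in chunks of n steps, remember the parameters with the
        # lowest validation error, and stop after `patience` evaluations without improvement.
        best_theta, best_err, best_i = copy.deepcopy(theta), float("inf"), 0
        i, j = 0, 0
        while j < patience:
            theta = train_n_steps(theta, n)      # update the parameters for n steps
            i += n
            err = validation_error(theta)
            if err < best_err:
                best_theta, best_err, best_i, j = copy.deepcopy(theta), err, i, 0
            else:
                j += 1
        return best_theta, best_i

    # Toy usage: "training" nudges a scalar toward 2.0 while the validation error
    # measures distance to 1.5, so the stored best parameters end up near 1.5.
    step = lambda theta, n: theta + 0.05 * n * (2.0 - theta)
    val = lambda theta: abs(theta - 1.5)
    print(early_stopping(0.0, step, val, n=1, patience=3))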
This strategy is known as early stopping. It is probably the most commonly
used form of regularization in deep learning. Its popularity is due both to its
effectiveness and its simplicity.
One way to think of early stopping is as a very efficient hyperparameter selection
algorithm. In this view, the number of training steps is just another hyperparameter.
We can see in figure 7.3 that this hyperparameter has a U-shaped validation set
performance curve. Most hyperparameters that control model capacity have such a
U-shaped validation set performance curve, as illustrated in figure 5.3. In the case of
early stopping, we are controlling the effective capacity of the model by determining
how many steps it can take to fit the training set. Most hyperparameters must be
chosen using an expensive guess and check process, where we set a hyperparameter
at the start of training, then run training for several steps to see its effect. The
“training time” hyperparameter is unique in that by definition a single run of
training tries out many values of the hyperparameter. The only significant cost
to choosing this hyperparameter automatically via early stopping is running the
validation set evaluation periodically during training. Ideally, this is done in
parallel to the training process on a separate machine, separate CPU, or separate
GPU from the main training process. If such resources are not available, then the
cost of these periodic evaluations may be reduced by using a validation set that is
small compared to the training set or by evaluating the validation set error less
frequently and obtaining a lower resolution estimate of the optimal training time.
An additional cost to early stopping is the need to maintain a copy of the
best parameters. This cost is generally negligible, because it is acceptable to store
these parameters in a slower and larger form of memory (for example, training in
GPU memory, but storing the optimal parameters in host memory or on a disk
drive). Since the best parameters are written to infrequently and never read during
training, these occasional slow writes have little effect on the total training time.
Early stopping is a very unobtrusive form of regularization, in that it requires
almost no change in the underlying training procedure, the objective function,
or the set of allowable parameter values. This means that it is easy to use early
stopping without damaging the learning dynamics. This is in contrast to weight
decay, where one must be careful not to use too much weight decay and trap the
network in a bad local minimum corresponding to a solution with pathologically
small weights.
Early stopping may be used either alone or in conjunction with other regulariza-
tion strategies. Even when using regularization strategies that modify the objective
function to encourage better generalization, it is rare for the best generalization to
occur at a local minimum of the training objective.
Early stopping requires a validation set, which means some training data is not
fed to the model. To best exploit this extra data, one can perform extra training
after the initial training with early stopping has completed. In the second, extra
training step, all of the training data is included. There are two basic strategies
one can use for this second training procedure.
One strategy (algorithm 7.2) is to initialize the model again and retrain on all
of the data. In this second training pass, we train for the same number of steps as
the early stopping procedure determined was optimal in the first pass. There are
some subtleties associated with this procedure. For example, there is not a good
way of knowing whether to retrain for the same number of parameter updates or
the same number of passes through the dataset. On the second round of training,
each pass through the dataset will require more parameter updates because the
training set is bigger.
Algorithm 7.2 A meta-algorithm for using early stopping to determine how long
to train, then retraining on all the data.

Let X(train) and y(train) be the training set.
Split X(train) and y(train) into (X(subtrain), X(valid)) and (y(subtrain), y(valid))
  respectively.
Run early stopping (algorithm 7.1) starting from random θ using X(subtrain) and
  y(subtrain) for training data and X(valid) and y(valid) for validation data. This
  returns i∗, the optimal number of steps.
Set θ to random values again.
Train on X(train) and y(train) for i∗ steps.
Another strategy for using all of the data is to keep the parameters obtained
from the first round of training and then continue training but now using all of
the data. At this stage, we now no longer have a guide for when to stop in terms
of a number of steps. Instead, we can monitor the average loss function on the
validation set, and continue training until it falls below the value of the training
set objective at which the early stopping procedure halted. This strategy avoids
the high cost of retraining the model from scratch, but is not as well-behaved. For
example, there is not any guarantee that the objective on the validation set will
ever reach the target value, so this strategy is not even guaranteed to terminate.
This procedure is presented more formally in algorithm 7.3.
Early stopping is also useful because it reduces the computational cost of the
training procedure. Besides the obvious reduction in cost due to limiting the number
of training iterations, it also has the benefit of providing regularization without
requiring the addition of penalty terms to the cost function or the computation of
the gradients of such additional terms.
How early stopping acts as a regularizer: So far we have stated that early
stopping is a regularization strategy, but we have supported this claim only by
showing learning curves where the validation set error has a U-shaped curve. What
Algorithm 7.3 Meta-algorithm using early stopping to determine at what objec-
tive value we start to overfit, then continue training until that value is reached.

Let X(train) and y(train) be the training set.
Split X(train) and y(train) into (X(subtrain), X(valid)) and (y(subtrain), y(valid))
  respectively.
Run early stopping (algorithm 7.1) starting from random θ using X(subtrain) and
  y(subtrain) for training data and X(valid) and y(valid) for validation data. This
  updates θ.
ε ← J(θ, X(subtrain), y(subtrain))
while J(θ, X(valid), y(valid)) > ε do
  Train on X(train) and y(train) for n steps.
end while
is the actual mechanism by which early stopping regularizes the model? Bishop
(1995a) and Sjöberg and Ljung (1995) argued that early stopping has the effect of
restricting the optimization procedure to a relatively small volume of parameter
space in the neighborhood of the initial parameter value θo, as illustrated in
figure 7.4. More specifically, imagine taking τ optimization steps (corresponding
to τ training iterations) with learning rate ε. We can view the product ετ
as a measure of effective capacity. Assuming the gradient is bounded, restricting
both the number of iterations and the learning rate limits the volume of parameter
space reachable from θo. In this sense, ετ behaves as if it were the reciprocal of
the coefficient used for weight decay.
Indeed, we can show how—in the case of a simple linear model with a quadratic
error function and simple gradient descent—early stopping is equivalent to L2
regularization.
In order to compare with classical L2
regularization, we examine a simple
setting where the only parameters are linear weights (θ = w). We can model
the cost function J with a quadratic approximation in the neighborhood of the
empirically optimal value of the weights w∗:
    Ĵ(θ) = J(w∗) + ½(w − w∗)ᵀH(w − w∗),     (7.33)
where H is the Hessian matrix of J with respect to w evaluated at w∗
. Given the
assumption that w∗
is a minimum of J(w), we know that H is positive semidefinite.
Under a local Taylor series approximation, the gradient is given by:
    ∇w Ĵ(w) = H(w − w∗).     (7.34)
Figure 7.4: An illustration of the effect of early stopping. (Left) The solid contour lines
indicate the contours of the negative log-likelihood. The dashed line indicates the trajectory
taken by SGD beginning from the origin. Rather than stopping at the point w∗ that
minimizes the cost, early stopping results in the trajectory stopping at an earlier point w̃.
(Right) An illustration of the effect of L2 regularization for comparison. The dashed circles
indicate the contours of the L2 penalty, which causes the minimum of the total cost to lie
nearer the origin than the minimum of the unregularized cost.
We are going to study the trajectory followed by the parameter vector during
training. For simplicity, let us set the initial parameter vector to the origin,3
that
is w(0) = 0. Let us study the approximate behavior of gradient descent on J by
analyzing gradient descent on Ĵ:

    w(τ) = w(τ−1) − ε∇w Ĵ(w(τ−1))     (7.35)
         = w(τ−1) − εH(w(τ−1) − w∗)     (7.36)
    w(τ) − w∗ = (I − εH)(w(τ−1) − w∗).     (7.37)
Let us now rewrite this expression in the space of the eigenvectors of H, exploiting
the eigendecomposition of H: H = QΛQᵀ, where Λ is a diagonal matrix and Q
is an orthonormal basis of eigenvectors.

    w(τ) − w∗ = (I − εQΛQᵀ)(w(τ−1) − w∗)     (7.38)
    Qᵀ(w(τ) − w∗) = (I − εΛ)Qᵀ(w(τ−1) − w∗)     (7.39)
³For neural networks, to obtain symmetry breaking between hidden units, we cannot initialize
all the parameters to 0, as discussed in section 6.2. However, the argument holds for any other
initial value w(0).
Assuming that w(0) = 0 and that ε is chosen to be small enough to guarantee
|1 − ελᵢ| < 1, the parameter trajectory during training after τ parameter updates
is as follows:

    Qᵀw(τ) = [I − (I − εΛ)^τ]Qᵀw∗.     (7.40)
Now, the expression for Qᵀw̃ in equation 7.13 for L2 regularization can be rear-
ranged as:

    Qᵀw̃ = (Λ + αI)⁻¹ΛQᵀw∗     (7.41)
    Qᵀw̃ = [I − (Λ + αI)⁻¹α]Qᵀw∗.     (7.42)
Comparing equation 7.40 and equation 7.42, we see that if the hyperparameters
ε, α, and τ are chosen such that

    (I − εΛ)^τ = (Λ + αI)⁻¹α,     (7.43)
then L2 regularization and early stopping can be seen to be equivalent (at least
under the quadratic approximation of the objective function). Going even further,
by taking logarithms and using the series expansion for log(1 +x), we can conclude
that if all λᵢ are small (that is, ελᵢ ≪ 1 and λᵢ/α ≪ 1) then

    τ ≈ 1/(εα),     (7.44)
    α ≈ 1/(ετ).     (7.45)
That is, under these assumptions, the number of training iterations τ plays a role
inversely proportional to the L2 regularization parameter, and the inverse of ετ
plays the role of the weight decay coefficient.
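For a quadratic cost this approximate correspondence is easy to check numerically. The sketch below uses an arbitrary diagonal Hessian chosen only for illustration, runs τ gradient descent steps from the origin, and compares the result with the L2-regularized solution for α = 1/(ετ):

    import numpy as np

    # Arbitrary quadratic problem J(w) = 0.5 (w - w*)^T H (w - w*) (illustration only).
    H = np.diag([0.01, 0.005])       # eigenvalues small enough that eps*lambda_i << 1
    w_star = np.array([1.0, -1.0])
    eps, tau = 0.1, 100

    # tau steps of gradient descent on J, starting from w = 0.
    w = np.zeros(2)
    for _ in range(tau):
        w = w - eps * (H @ (w - w_star))

    # L2-regularized minimum with alpha = 1 / (eps * tau), equation 7.10.
    alpha = 1.0 / (eps * tau)
    w_l2 = np.linalg.solve(H + alpha * np.eye(2), H @ w_star)

    print(w)     # approximately [ 0.095 -0.049]
    print(w_l2)  # approximately [ 0.091 -0.048]: similar shrinkage along each direction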
Parameter values corresponding to directions of significant curvature (of the
objective function) are regularized less than directions of less curvature. Of course,
in the context of early stopping, this really means that parameters that correspond
to directions of significant curvature tend to learn early relative to parameters
corresponding to directions of less curvature.
The derivations in this section have shown that a trajectory of length τ ends
at a point that corresponds to a minimum of the L2-regularized objective. Early
stopping is of course more than the mere restriction of the trajectory length;
instead, early stopping typically involves monitoring the validation set error in
order to stop the trajectory at a particularly good point in space. Early stopping
therefore has the advantage over weight decay that early stopping automatically
determines the correct amount of regularization while weight decay requires many
training experiments with different values of its hyperparameter.
7.9 Parameter Tying and Parameter Sharing
Thus far, in this chapter, when we have discussed adding constraints or penalties
to the parameters, we have always done so with respect to a fixed region or point.
For example, L2
regularization (or weight decay) penalizes model parameters for
deviating from the fixed value of zero. However, sometimes we may need other
ways to express our prior knowledge about suitable values of the model parameters.
Sometimes we might not know precisely what values the parameters should take
but we know, from knowledge of the domain and model architecture, that there
should be some dependencies between the model parameters.
A common type of dependency that we often want to express is that certain
parameters should be close to one another. Consider the following scenario: we
have two models performing the same classification task (with the same set of
classes) but with somewhat different input distributions. Formally, we have model
A with parameters w(A) and model B with parameters w(B). The two models
map the input to two different, but related outputs: ŷ(A) = f(w(A), x) and
ŷ(B) = g(w(B), x).
Let us imagine that the tasks are similar enough (perhaps with similar input
and output distributions) that we believe the model parameters should be close
to each other: ∀i, w(A)ᵢ should be close to w(B)ᵢ. We can leverage this information
through regularization. Specifically, we can use a parameter norm penalty of the
form: Ω(w(A), w(B)) = ‖w(A) − w(B)‖²₂. Here we used an L2 penalty, but other
choices are also possible.
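As a minimal sketch (illustrative names, not notation fixed by the text), the tying penalty and its gradient with respect to w(A) are only a couple of lines; the gradient term would simply be added to the task-A gradient during training:

    import numpy as np

    def tying_penalty(w_a, w_b):
        # Omega(w_a, w_b) = ||w_a - w_b||_2^2, encouraging the two parameter sets to stay close.
        diff = w_a - w_b
        return np.sum(diff ** 2)

    def tying_penalty_grad_wrt_a(w_a, w_b):
        # Gradient of the penalty with respect to w_a.
        return 2.0 * (w_a - w_b)

    w_a = np.array([1.0, 0.5])
    w_b = np.array([0.8, 0.7])
    print(tying_penalty(w_a, w_b), tying_penalty_grad_wrt_a(w_a, w_b))  # 0.08 [ 0.4 -0.4]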
This kind of approach was proposed by Lasserre et al. (2006), who regularized
the parameters of one model, trained as a classifier in a supervised paradigm, to
be close to the parameters of another model, trained in an unsupervised paradigm
(to capture the distribution of the observed input data). The architectures were
constructed such that many of the parameters in the classifier model could be
paired to corresponding parameters in the unsupervised model.
While a parameter norm penalty is one way to regularize parameters to be
close to one another, the more popular way is to use constraints: to force sets
of parameters to be equal. This method of regularization is often referred to as
parameter sharing, because we interpret the various models or model components
as sharing a unique set of parameters. A significant advantage of parameter sharing
over regularizing the parameters to be close (via a norm penalty) is that only a
subset of the parameters (the unique set) need to be stored in memory. In certain
models—such as the convolutional neural network—this can lead to significant
reduction in the memory footprint of the model.
Convolutional Neural Networks By far the most popular and extensive use
of parameter sharing occurs in convolutional neural networks (CNNs) applied
to computer vision.
Natural images have many statistical properties that are invariant to translation.
For example, a photo of a cat remains a photo of a cat if it is translated one pixel
to the right. CNNs take this property into account by sharing parameters across
multiple image locations. The same feature (a hidden unit with the same weights)
is computed over different locations in the input. This means that we can find a
cat with the same cat detector whether the cat appears at column i or column
i + 1 in the image.
Parameter sharing has allowed CNNs to dramatically lower the number of unique
model parameters and to significantly increase network sizes without requiring a
corresponding increase in training data. It remains one of the best examples of
how to effectively incorporate domain knowledge into the network architecture.
CNNs will be discussed in more detail in chapter 9.
7.10 Sparse Representations
Weight decay acts by placing a penalty directly on the model parameters. Another
strategy is to place a penalty on the activations of the units in a neural network,
encouraging their activations to be sparse. This indirectly imposes a complicated
penalty on the model parameters.
We have already discussed (in section 7.1.2) how L1 penalization induces
a sparse parametrization—meaning that many of the parameters become zero
(or close to zero). Representational sparsity, on the other hand, describes a
representation where many of the elements of the representation are zero (or close
to zero). A simplified view of this distinction can be illustrated in the context of
linear regression:
    [ 18]   [ 4  0  0 −2  0  0] [ 2]
    [  5]   [ 0  0 −1  0  3  0] [ 3]
    [ 15] = [ 0  5  0  0  0  0] [−2]     (7.46)
    [ −9]   [ 1  0  0 −1  0 −4] [−5]
    [ −3]   [ 1  0  0  0 −5  0] [ 1]
                                [ 4]
    y ∈ Rᵐ        A ∈ Rᵐˣⁿ       x ∈ Rⁿ
    [−14]   [ 3 −1  2 −5  4  1] [ 0]
    [  1]   [ 4  2 −3 −1  1  3] [ 2]
    [ 19] = [−1  5  4  2 −3 −2] [ 0]     (7.47)
    [  2]   [ 3  1  2 −3  0 −3] [ 0]
    [ 23]   [−5  4 −2  2 −5 −1] [−3]
                                [ 0]
    y ∈ Rᵐ        B ∈ Rᵐˣⁿ       h ∈ Rⁿ
In the first expression, we have an example of a sparsely parametrized linear
regression model. In the second, we have linear regression with a sparse representa-
tion h of the data x. That is, h is a function of x that, in some sense, represents
the information present in x, but does so with a sparse vector.
Representational regularization is accomplished by the same sorts of mechanisms
that we have used in parameter regularization.
Norm penalty regularization of representations is performed by adding to the
loss function J a norm penalty on the representation. This penalty is denoted
Ω(h). As before, we denote the regularized loss function by J̃:

    J̃(θ; X, y) = J(θ; X, y) + αΩ(h),     (7.48)

where α ∈ [0, ∞) weights the relative contribution of the norm penalty term, with
larger values of α corresponding to more regularization.
Just as an L1 penalty on the parameters induces parameter sparsity, an L1
penalty on the elements of the representation induces representational sparsity:
Ω(h) = ‖h‖₁ = Σᵢ |hᵢ|. Of course, the L1 penalty is only one choice of penalty
that can result in a sparse representation. Others include the penalty derived from
a Student-t prior on the representation (Olshausen and Field, 1996; Bergstra, 2011)
and KL divergence penalties (Larochelle and Bengio, 2008) that are especially
useful for representations with elements constrained to lie on the unit interval.
Lee et al. (2008) and Goodfellow et al. (2009) both provide examples of strategies
based on regularizing the average activation across several examples, (1/m) Σᵢ h(i), to
be near some target value, such as a vector with .01 for each entry.
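A minimal sketch of adding an L1 activity penalty to a training loss follows equation 7.48; the batch of activations h, the task loss value, and the coefficient are invented for illustration:

    import numpy as np

    def l1_activity_penalty(h):
        # Omega(h) = ||h||_1 per example, averaged over the batch (rows of h).
        return np.mean(np.sum(np.abs(h), axis=1))

    def regularized_loss(task_loss, h, alpha=1e-3):
        # J_tilde = J + alpha * Omega(h), as in equation 7.48.
        return task_loss + alpha * l1_activity_penalty(h)

    h = np.array([[0.0, 2.0, 0.0, -0.5],
                  [1.0, 0.0, 0.0, 0.0]])
    print(regularized_loss(task_loss=0.25, h=h, alpha=1e-3))  # 0.25 + 0.001 * 1.75 = 0.25175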
Other approaches obtain representational sparsity with a hard constraint on
the activation values. For example, orthogonal matching pursuit (Pati et al.,
1993) encodes an input x with the representation h that solves the constrained
optimization problem
    arg min_{h, ‖h‖₀ < k} ‖x − Wh‖²,     (7.49)

where ‖h‖₀ is the number of non-zero entries of h. This problem can be solved
efficiently when W is constrained to be orthogonal. This method is often called
OMP-k with the value of k specified to indicate the number of non-zero features
allowed. ( ) demonstrated that OMP- can be a very effective
Coates and Ng 2011 1
feature extractor for deep architectures.
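As a rough illustration of the orthogonal special case mentioned above, the sketch below (hypothetical function name and shapes) solves equation 7.49 when the columns of W are orthonormal by projecting the input and keeping the k largest-magnitude coefficients; the general greedy OMP procedure needed for non-orthogonal dictionaries is not shown.

```python
import numpy as np

def omp_k_orthogonal(x, W, k):
    """Sparse code h = argmin ||x - W h||^2 subject to ||h||_0 <= k,
    valid only when the columns of W are orthonormal."""
    c = W.T @ x                          # projection coefficients
    h = np.zeros_like(c)
    keep = np.argsort(np.abs(c))[-k:]    # indices of the k largest |coefficients|
    h[keep] = c[keep]
    return h

rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.normal(size=(16, 8)))   # 16 x 8 matrix with orthonormal columns
x = rng.normal(size=16)
h = omp_k_orthogonal(x, W, k=1)                 # an OMP-1 style code
print(np.count_nonzero(h), np.linalg.norm(x - W @ h))
```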
Essentially any model that has hidden units can be made sparse. Throughout
this book, we will see many examples of sparsity regularization used in a variety of
contexts.
7.11 Bagging and Other Ensemble Methods
Bagging (short for bootstrap aggregating) is a technique for reducing generalization
error by combining several models (Breiman, 1994). The idea is to
train several different models separately, then have all of the models vote on the
output for test examples. This is an example of a general strategy in machine
learning called model averaging. Techniques employing this strategy are known
as ensemble methods.
The reason that model averaging works is that different models will usually
not make all the same errors on the test set.
Consider for example a set of k regression models. Suppose that each model
makes an error ε_i on each example, with the errors drawn from a zero-mean
multivariate normal distribution with variances E[ε_i²] = v and covariances
E[ε_i ε_j] = c. Then the error made by the average prediction of all the ensemble
models is $\frac{1}{k}\sum_i \epsilon_i$. The expected squared error of the ensemble predictor is
$$\mathbb{E}\left[\left(\frac{1}{k}\sum_i \epsilon_i\right)^{2}\right] = \frac{1}{k^2}\,\mathbb{E}\left[\sum_i \left(\epsilon_i^2 + \sum_{j \neq i} \epsilon_i \epsilon_j\right)\right] \tag{7.50}$$
$$= \frac{1}{k} v + \frac{k-1}{k} c. \tag{7.51}$$
In the case where the errors are perfectly correlated and c = v, the mean squared
error reduces to v, so the model averaging does not help at all. In the case where
the errors are perfectly uncorrelated and c = 0, the expected squared error of the
ensemble is only $\frac{1}{k}v$. This means that the expected squared error of the ensemble
decreases linearly with the ensemble size. In other words, on average, the ensemble
will perform at least as well as any of its members, and if the members make
independent errors, the ensemble will perform significantly better than its members.
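A quick simulation can sanity-check equation 7.51. The sketch below (with arbitrarily chosen k, v, and c) draws correlated errors from the stated zero-mean multivariate normal and compares the empirical mean squared ensemble error to v/k + (k − 1)c/k.

```python
import numpy as np

rng = np.random.default_rng(0)
k, v, c, n_trials = 10, 1.0, 0.25, 200_000

# Covariance of the k correlated errors: v on the diagonal, c off the diagonal.
cov = np.full((k, k), c) + (v - c) * np.eye(k)
errors = rng.multivariate_normal(np.zeros(k), cov, size=n_trials)

ensemble_error = errors.mean(axis=1)     # (1/k) * sum_i eps_i
empirical = np.mean(ensemble_error ** 2)
predicted = v / k + (k - 1) / k * c      # equation 7.51

print(empirical, predicted)              # the two values should roughly agree
```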
Different ensemble methods construct the ensemble of models in different ways.
For example, each member of the ensemble could be formed by training a completely
[Figure 7.5 panels: the original dataset, a first and second resampled dataset, and the first and second ensemble members trained on them.]
Figure 7.5: A cartoon depiction of how bagging works. Suppose we train an 8 detector on
the dataset depicted above, containing an 8, a 6 and a 9. Suppose we make two different
resampled datasets. The bagging training procedure is to construct each of these datasets
by sampling with replacement. The first dataset omits the 9 and repeats the 8. On this
dataset, the detector learns that a loop on top of the digit corresponds to an 8. On
the second dataset, we repeat the 9 and omit the 6. In this case, the detector learns
that a loop on the bottom of the digit corresponds to an 8. Each of these individual
classification rules is brittle, but if we average their output then the detector is robust,
achieving maximal confidence only when both loops of the 8 are present.
different kind of model using a different algorithm or objective function. Bagging
is a method that allows the same kind of model, training algorithm and objective
function to be reused several times.
Specifically, bagging involves constructing k different datasets. Each dataset
has the same number of examples as the original dataset, but each dataset is
constructed by sampling with replacement from the original dataset. This means
that, with high probability, each dataset is missing some of the examples from the
original dataset and also contains several duplicate examples (on average around
2/3 of the examples from the original dataset are found in the resulting training
set, if it has the same size as the original). Model i is then trained on dataset
i. The differences between which examples are included in each dataset result in
differences between the trained models. See figure 7.5 for an example.
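The resampling step itself is a one-liner; the sketch below (dataset size chosen arbitrarily) draws one bagging dataset and measures what fraction of the original examples it contains, which comes out near 1 − 1/e ≈ 0.63, the roughly two thirds quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10_000                                    # size of the original dataset
indices = np.arange(m)

# One bagging dataset: sample m example indices with replacement.
resampled = rng.choice(indices, size=m, replace=True)
unique_fraction = np.unique(resampled).size / m

print(unique_fraction)                        # roughly 1 - 1/e, about 2/3
```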
Neural networks reach a wide enough variety of solution points that they can
often benefit from model averaging even if all of the models are trained on the same
dataset. Differences in random initialization, random selection of minibatches,
differences in hyperparameters, or different outcomes of non-deterministic imple-
mentations of neural networks are often enough to cause different members of the
ensemble to make partially independent errors.
Model averaging is an extremely powerful and reliable method for reducing
generalization error. Its use is usually discouraged when benchmarking algorithms
for scientific papers, because any machine learning algorithm can benefit substan-
tially from model averaging at the price of increased computation and memory.
For this reason, benchmark comparisons are usually made using a single model.
Machine learning contests are usually won by methods using model averag-
ing over dozens of models. A recent prominent example is the Netflix Grand
Prize (Koren, 2009).
Not all techniques for constructing ensembles are designed to make the ensemble
more regularized than the individual models. For example, a technique called
boosting (Freund and Schapire, 1996b,a) constructs an ensemble with higher
capacity than the individual models. Boosting has been applied to build ensembles
of neural networks (Schwenk and Bengio, 1998) by incrementally adding neural
networks to the ensemble. Boosting has also been applied interpreting an individual
neural network as an ensemble (Bengio et al., 2006a), incrementally adding hidden
units to the neural network.
7.12 Dropout
Dropout (Srivastava et al., 2014) provides a computationally inexpensive but
powerful method of regularizing a broad family of models. To a first approximation,
dropout can be thought of as a method of making bagging practical for ensembles
of very many large neural networks. Bagging involves training multiple models,
and evaluating multiple models on each test example. This seems impractical
when each model is a large neural network, since training and evaluating such
networks is costly in terms of runtime and memory. It is common to use ensembles
of five to ten neural networks—Szegedy et al. (2014a) used six to win the ILSVRC—
but more than this rapidly becomes unwieldy. Dropout provides an inexpensive
approximation to training and evaluating a bagged ensemble of exponentially many
neural networks.
Specifically, dropout trains the ensemble consisting of all sub-networks that
can be formed by removing non-output units from an underlying base network,
as illustrated in figure 7.6. In most modern neural networks, based on a series of
affine transformations and nonlinearities, we can effectively remove a unit from a
network by multiplying its output value by zero. This procedure requires some
slight modification for models such as radial basis function networks, which take
the difference between the unit’s state and some reference value. Here, we present
the dropout algorithm in terms of multiplication by zero for simplicity, but it can
be trivially modified to work with other operations that remove a unit from the
network.
Recall that to learn with bagging, we define k different models, construct k
different datasets by sampling from the training set with replacement, and then
train model i on dataset i. Dropout aims to approximate this process, but with an
exponentially large number of neural networks. Specifically, to train with dropout,
we use a minibatch-based learning algorithm that makes small steps, such as
stochastic gradient descent. Each time we load an example into a minibatch, we
randomly sample a different binary mask to apply to all of the input and hidden
units in the network. The mask for each unit is sampled independently from all of
the others. The probability of sampling a mask value of one (causing a unit to be
included) is a hyperparameter fixed before training begins. It is not a function
of the current value of the model parameters or the input example. Typically,
an input unit is included with probability 0.8 and a hidden unit is included with
probability 0.5. We then run forward propagation, back-propagation, and the
learning update as usual. Figure 7.7 illustrates how to run forward propagation
with dropout.
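The following numpy sketch shows one stochastic forward pass of this kind for a tiny two-layer network; the relu nonlinearity, the shapes, and the function name are assumptions made for illustration rather than a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(x, W1, b1, W2, b2, p_input=0.8, p_hidden=0.5):
    """One stochastic forward pass with freshly sampled dropout masks.
    Each mask entry is 1 (keep the unit) with the given probability."""
    mu_x = rng.binomial(1, p_input, size=x.shape)    # mask for the input units
    x_hat = x * mu_x
    h = np.maximum(0.0, x_hat @ W1 + b1)             # hidden layer (relu assumed)
    mu_h = rng.binomial(1, p_hidden, size=h.shape)   # mask for the hidden units
    h_hat = h * mu_h
    return h_hat @ W2 + b2                           # output units are never dropped

x = rng.normal(size=(1, 2))
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)
print(dropout_forward(x, W1, b1, W2, b2))
```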
More formally, suppose that a mask vector µ specifies which units to include,
and J(θ, µ) defines the cost of the model defined by parameters θ and mask µ.
Then dropout training consists in minimizing $\mathbb{E}_{\mu} J(\theta, \mu)$. The expectation contains
exponentially many terms but we can obtain an unbiased estimate of its gradient
by sampling values of µ.
Dropout training is not quite the same as bagging training. In the case of
bagging, the models are all independent. In the case of dropout, the models share
parameters, with each model inheriting a different subset of parameters from the
parent neural network. This parameter sharing makes it possible to represent an
exponential number of models with a tractable amount of memory. In the case of
bagging, each model is trained to convergence on its respective training set. In the
case of dropout, typically most models are not explicitly trained at all—usually,
the model is large enough that it would be infeasible to sample all possible sub-
networks within the lifetime of the universe. Instead, a tiny fraction of the possible
sub-networks are each trained for a single step, and the parameter sharing causes
the remaining sub-networks to arrive at good settings of the parameters. These
are the only differences. Beyond these, dropout follows the bagging algorithm. For
example, the training set encountered by each sub-network is indeed a subset of
the original training set sampled with replacement.
[Figure 7.6 diagram: a base network with inputs x1, x2, hidden units h1, h2, and output y, alongside the sixteen subnetworks obtained by dropping out subsets of the non-output units.]
Figure 7.6: Dropout trains an ensemble consisting of all sub-networks that can be
constructed by removing non-output units from an underlying base network. Here, we
begin with a base network with two visible units and two hidden units. There are sixteen
possible subsets of these four units. We show all sixteen subnetworks that may be formed
by dropping out different subsets of units from the original network. In this small example,
a large proportion of the resulting networks have no input units or no path connecting
the input to the output. This problem becomes insignificant for networks with wider
layers, where the probability of dropping all possible paths from inputs to outputs becomes
smaller.
[Figure 7.7 diagram: the base network with units x1, x2, h1, h2, y, and the same network with each input and hidden unit multiplied by its mask entry µ to give the dropped-out units x̂1, x̂2, ĥ1, ĥ2.]
Figure 7.7: An example of forward propagation through a feedforward network using
dropout. (Top) In this example, we use a feedforward network with two input units, one
hidden layer with two hidden units, and one output unit. (Bottom) To perform forward
propagation with dropout, we randomly sample a vector µ with one entry for each input
or hidden unit in the network. The entries of µ are binary and are sampled independently
from each other. The probability of each entry being 1 is a hyperparameter, usually 0.5
for the hidden layers and 0.8 for the input. Each unit in the network is multiplied by
the corresponding mask, and then forward propagation continues through the rest of the
network as usual. This is equivalent to randomly selecting one of the sub-networks from
figure 7.6 and running forward propagation through it.
To make a prediction, a bagged ensemble must accumulate votes from all of
its members. We refer to this process as inference in this context. So far, our
description of bagging and dropout has not required that the model be explicitly
probabilistic. Now, we assume that the model’s role is to output a probability
distribution. In the case of bagging, each model i produces a probability distribution
$p^{(i)}(y \mid x)$. The prediction of the ensemble is given by the arithmetic mean of all
of these distributions,
$$\frac{1}{k} \sum_{i=1}^{k} p^{(i)}(y \mid x). \tag{7.52}$$
In the case of dropout, each sub-model defined by mask vector µ defines a probability
distribution $p(y \mid x, \mu)$. The arithmetic mean over all masks is given by
$$\sum_{\mu} p(\mu)\, p(y \mid x, \mu), \tag{7.53}$$
where p(µ) is the probability distribution that was used to sample µ at training
time.
Because this sum includes an exponential number of terms, it is intractable
to evaluate except in cases where the structure of the model permits some form
of simplification. So far, deep neural nets are not known to permit any tractable
simplification. Instead, we can approximate the inference with sampling, by
averaging together the output from many masks. Even 10-20 masks are often
sufficient to obtain good performance.
However, there is an even better approach, that allows us to obtain a good
approximation to the predictions of the entire ensemble, at the cost of only one
forward propagation. To do so, we change to using the geometric mean rather than
the arithmetic mean of the ensemble members' predicted distributions. Warde-Farley
et al. (2014) present arguments and empirical evidence that the geometric
mean performs comparably to the arithmetic mean in this context.
The geometric mean of multiple probability distributions is not guaranteed to be
a probability distribution. To guarantee that the result is a probability distribution,
we impose the requirement that none of the sub-models assigns probability 0 to any
event, and we renormalize the resulting distribution. The unnormalized probability
distribution defined directly by the geometric mean is given by
$$\tilde{p}_{\text{ensemble}}(y \mid x) = \sqrt[2^d]{\prod_{\mu} p(y \mid x, \mu)}, \tag{7.54}$$
where d is the number of units that may be dropped. Here we use a uniform
distribution over µ to simplify the presentation, but non-uniform distributions are
also possible. To make predictions we must re-normalize the ensemble:
$$p_{\text{ensemble}}(y \mid x) = \frac{\tilde{p}_{\text{ensemble}}(y \mid x)}{\sum_{y'} \tilde{p}_{\text{ensemble}}(y' \mid x)}. \tag{7.55}$$
A key insight (Hinton et al., 2012c) involved in dropout is that we can approxi-
mate pensemble by evaluating p(y | x) in one model: the model with all units, but
with the weights going out of unit i multiplied by the probability of including unit
i. The motivation for this modification is to capture the right expected value of the
output from that unit. We call this approach the weight scaling inference rule.
There is not yet any theoretical argument for the accuracy of this approximate
inference rule in deep nonlinear networks, but empirically it performs very well.
Because we usually use an inclusion probability of 1/2, the weight scaling rule
usually amounts to dividing the weights by 2 at the end of training, and then using
the model as usual. Another way to achieve the same result is to multiply the
states of the units by 2 during training. Either way, the goal is to make sure that
the expected total input to a unit at test time is roughly the same as the expected
total input to that unit at train time, even though half the units at train time are
missing on average.
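For the same kind of tiny network, a sketch of the weight scaling inference rule looks as follows (shapes and the relu hidden layer are again assumptions); scaling each unit's activations by its inclusion probability, as done here, is equivalent to scaling its outgoing weights.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 2))
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(2, 1)), np.zeros(1)

def predict_weight_scaled(x, W1, b1, W2, b2, p_input=0.8, p_hidden=0.5):
    """Approximate ensemble prediction in a single deterministic forward pass:
    every unit is kept, but its output is multiplied by its inclusion
    probability so that the expected total input to each downstream unit
    matches what that unit saw during dropout training."""
    h = np.maximum(0.0, (p_input * x) @ W1 + b1)   # inputs were kept with prob 0.8
    return (p_hidden * h) @ W2 + b2                # hidden units were kept with prob 0.5

print(predict_weight_scaled(x, W1, b1, W2, b2))
```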
For many classes of models that do not have nonlinear hidden units, the weight
scaling inference rule is exact. For a simple example, consider a softmax regression
classifier with n input variables represented by the vector v:
$$P(\mathrm{y} = y \mid \mathbf{v}) = \operatorname{softmax}\left(\mathbf{W}^\top \mathbf{v} + \mathbf{b}\right)_y. \tag{7.56}$$
We can index into the family of sub-models by element-wise multiplication of the
input with a binary vector d:
$$P(\mathrm{y} = y \mid \mathbf{v}; \mathbf{d}) = \operatorname{softmax}\left(\mathbf{W}^\top (\mathbf{d} \odot \mathbf{v}) + \mathbf{b}\right)_y. \tag{7.57}$$
The ensemble predictor is defined by re-normalizing the geometric mean over all
ensemble members’ predictions:
$$P_{\text{ensemble}}(\mathrm{y} = y \mid \mathbf{v}) = \frac{\tilde{P}_{\text{ensemble}}(\mathrm{y} = y \mid \mathbf{v})}{\sum_{y'} \tilde{P}_{\text{ensemble}}(\mathrm{y} = y' \mid \mathbf{v})}, \tag{7.58}$$
where
$$\tilde{P}_{\text{ensemble}}(\mathrm{y} = y \mid \mathbf{v}) = \sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} P(\mathrm{y} = y \mid \mathbf{v}; \mathbf{d})}. \tag{7.59}$$
To see that the weight scaling rule is exact, we can simplify $\tilde{P}_{\text{ensemble}}$:
$$\tilde{P}_{\text{ensemble}}(\mathrm{y} = y \mid \mathbf{v}) = \sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} P(\mathrm{y} = y \mid \mathbf{v}; \mathbf{d})} \tag{7.60}$$
$$= \sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} \operatorname{softmax}\left(\mathbf{W}^\top (\mathbf{d} \odot \mathbf{v}) + \mathbf{b}\right)_y} \tag{7.61}$$
$$= \sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} \frac{\exp\left(\mathbf{W}_{y,:}^\top (\mathbf{d} \odot \mathbf{v}) + b_y\right)}{\sum_{y'} \exp\left(\mathbf{W}_{y',:}^\top (\mathbf{d} \odot \mathbf{v}) + b_{y'}\right)}} \tag{7.62}$$
$$= \frac{\sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} \exp\left(\mathbf{W}_{y,:}^\top (\mathbf{d} \odot \mathbf{v}) + b_y\right)}}{\sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} \sum_{y'} \exp\left(\mathbf{W}_{y',:}^\top (\mathbf{d} \odot \mathbf{v}) + b_{y'}\right)}}. \tag{7.63}$$
Because $\tilde{P}$ will be normalized, we can safely ignore multiplication by factors that
are constant with respect to y:
$$\tilde{P}_{\text{ensemble}}(\mathrm{y} = y \mid \mathbf{v}) \propto \sqrt[2^n]{\prod_{\mathbf{d} \in \{0,1\}^n} \exp\left(\mathbf{W}_{y,:}^\top (\mathbf{d} \odot \mathbf{v}) + b_y\right)} \tag{7.64}$$
$$= \exp\left(\frac{1}{2^n} \sum_{\mathbf{d} \in \{0,1\}^n} \mathbf{W}_{y,:}^\top (\mathbf{d} \odot \mathbf{v}) + b_y\right) \tag{7.65}$$
$$= \exp\left(\frac{1}{2} \mathbf{W}_{y,:}^\top \mathbf{v} + b_y\right). \tag{7.66}$$
Substituting this back into equation 7.58, we obtain a softmax classifier with weights
$\frac{1}{2}\mathbf{W}$.
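The derivation can also be checked numerically. The sketch below (arbitrary small dimensions) enumerates all 2^n masks for a softmax regression model, forms the renormalized geometric mean of equations 7.58–7.59, and compares it with a single forward pass using the halved weights; the two agree to numerical precision.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, n_classes = 4, 3
W = rng.normal(size=(n_classes, n))      # rows play the role of W_{y,:}
b = rng.normal(size=n_classes)
v = rng.normal(size=n)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Renormalized geometric mean over all 2^n binary masks d (eqs. 7.58-7.59).
log_probs = [np.log(softmax(W @ (np.array(d) * v) + b))
             for d in product([0.0, 1.0], repeat=n)]
geo = np.exp(np.mean(log_probs, axis=0))
p_ensemble = geo / geo.sum()

# Weight scaling rule: one forward pass with the weights divided by 2 (eq. 7.66).
p_scaled = softmax(0.5 * (W @ v) + b)

print(np.allclose(p_ensemble, p_scaled))  # True
```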
The weight scaling rule is also exact in other settings, including regression
networks with conditionally normal outputs, and deep networks that have hidden
layers without nonlinearities. However, the weight scaling rule is only an approxi-
mation for deep models that have nonlinearities. Though the approximation has
not been theoretically characterized, it often works well, empirically. Goodfellow
et al. (2013a) found experimentally that the weight scaling approximation can work
better (in terms of classification accuracy) than Monte Carlo approximations to the
ensemble predictor. This held true even when the Monte Carlo approximation was
allowed to sample up to 1,000 sub-networks. Gal and Ghahramani (2015) found
that some models obtain better classification accuracy using twenty samples and
the Monte Carlo approximation. It appears that the optimal choice of inference
approximation is problem-dependent.
Srivastava et al. (2014) showed that dropout is more effective than other
standard computationally inexpensive regularizers, such as weight decay, filter
norm constraints and sparse activity regularization. Dropout may also be combined
with other forms of regularization to yield a further improvement.
One advantage of dropout is that it is very computationally cheap. Using
dropout during training requires only O(n) computation per example per update,
to generate n random binary numbers and multiply them by the state. Depending
on the implementation, it may also require O(n) memory to store these binary
numbers until the back-propagation stage. Running inference in the trained model
has the same cost per-example as if dropout were not used, though we must pay
the cost of dividing the weights by 2 once before beginning to run inference on
examples.
Another significant advantage of dropout is that it does not significantly limit
the type of model or training procedure that can be used. It works well with nearly
any model that uses a distributed representation and can be trained with stochastic
gradient descent. This includes feedforward neural networks, probabilistic models
such as restricted Boltzmann machines (Srivastava et al., 2014), and recurrent
neural networks (Bayer and Osendorfer, 2014; Pascanu et al., 2014a). Many other
regularization strategies of comparable power impose more severe restrictions on
the architecture of the model.
Though the cost per-step of applying dropout to a specific model is negligible,
the cost of using dropout in a complete system can be significant. Because dropout
is a regularization technique, it reduces the effective capacity of a model. To offset
this effect, we must increase the size of the model. Typically the optimal validation
set error is much lower when using dropout, but this comes at the cost of a much
larger model and many more iterations of the training algorithm. For very large
datasets, regularization confers little reduction in generalization error. In these
cases, the computational cost of using dropout and larger models may outweigh
the benefit of regularization.
When extremely few labeled training examples are available, dropout is less
effective. Bayesian neural networks (Neal, 1996) outperform dropout on the
Alternative Splicing Dataset (Xiong et al., 2011) where fewer than 5,000 examples
are available (Srivastava et al., 2014). When additional unlabeled data is available,
unsupervised feature learning can gain an advantage over dropout.
Wager et al. (2013) showed that, when applied to linear regression, dropout
is equivalent to L2 weight decay, with a different weight decay coefficient for
each input feature. The magnitude of each feature’s weight decay coefficient is
determined by its variance. Similar results hold for other linear models. For deep
models, dropout is not equivalent to weight decay.
The stochasticity used while training with dropout is not necessary for the
approach’s success. It is just a means of approximating the sum over all sub-
models. Wang and Manning (2013) derived analytical approximations to this
marginalization. Their approximation, known as fast dropout, resulted in faster
convergence time due to the reduced stochasticity in the computation of the
gradient. This method can also be applied at test time, as a more principled
(but also more computationally expensive) approximation to the average over all
sub-networks than the weight scaling approximation. Fast dropout has been used
to nearly match the performance of standard dropout on small neural network
problems, but has not yet yielded a significant improvement or been applied to a
large problem.
Just as stochasticity is not necessary to achieve the regularizing effect of
dropout, it is also not sufficient. To demonstrate this, Warde-Farley et al. (2014)
designed control experiments using a method called dropout boosting that they
designed to use exactly the same mask noise as traditional dropout but lack
its regularizing effect. Dropout boosting trains the entire ensemble to jointly
maximize the log-likelihood on the training set. In the same sense that traditional
dropout is analogous to bagging, this approach is analogous to boosting. As
intended, experiments with dropout boosting show almost no regularization effect
compared to training the entire network as a single model. This demonstrates that
the interpretation of dropout as bagging has value beyond the interpretation of
dropout as robustness to noise. The regularization effect of the bagged ensemble is
only achieved when the stochastically sampled ensemble members are trained to
perform well independently of each other.
Dropout has inspired other stochastic approaches to training exponentially
large ensembles of models that share weights. DropConnect is a special case of
dropout where each product between a single scalar weight and a single hidden
unit state is considered a unit that can be dropped (Wan et al., 2013). Stochastic
pooling is a form of randomized pooling (see section 9.3) for building ensembles
of convolutional networks with each convolutional network attending to different
spatial locations of each feature map. So far, dropout remains the most widely
used implicit ensemble method.
One of the key insights of dropout is that training a network with stochastic
behavior and making predictions by averaging over multiple stochastic decisions
implements a form of bagging with parameter sharing. Earlier, we described
dropout as bagging an ensemble of models formed by including or excluding
units. However, there is no need for this model averaging strategy to be based on
inclusion and exclusion. In principle, any kind of random modification is admissible.
In practice, we must choose modification families that neural networks are able
to learn to resist. Ideally, we should also use model families that allow a fast
approximate inference rule. We can think of any form of modification parametrized
by a vector µ as training an ensemble consisting of p(y ,
| x µ) for all possible
values of µ. There is no requirement that µ have a finite number of values. For
example, µ can be real-valued. Srivastava 2014
et al. ( ) showed that multiplying the
weights by µ ∼ N (1, I) can outperform dropout based on binary masks. Because
E[µ] = 1 the standard network automatically implements approximate inference
in the ensemble, without needing any weight scaling.
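A minimal sketch of this real-valued variant (the activations here are placeholders, not tied to any particular architecture): each unit is multiplied by noise drawn from N(1, 1), and because the noise has mean one, running the unmodified network at test time already approximates the ensemble average.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(32, 50))            # some hidden activations (placeholder)

# Multiplicative Gaussian noise with mean 1: mu ~ N(1, I), applied at training time.
mu = rng.normal(loc=1.0, scale=1.0, size=h.shape)
h_noisy = h * mu

# Because E[mu] = 1, the noise-free activations already match the expected ones,
# so no weight scaling is needed at test time.
print(h_noisy.mean(), h.mean())
```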
So far we have described dropout purely as a means of performing efficient,
approximate bagging. However, there is another view of dropout that goes further
than this. Dropout trains not just a bagged ensemble of models, but an ensemble
of models that share hidden units. This means each hidden unit must be able to
perform well regardless of which other hidden units are in the model. Hidden units
must be prepared to be swapped and interchanged between models. Hinton et al.
(2012c) were inspired by an idea from biology: sexual reproduction, which involves
swapping genes between two different organisms, creates evolutionary pressure for
genes to become not just good, but to become readily swapped between different
organisms. Such genes and such features are very robust to changes in their
environment because they are not able to incorrectly adapt to unusual features
of any one organism or model. Dropout thus regularizes each hidden unit to be
not merely a good feature but a feature that is good in many contexts. Warde-Farley
et al. (2014) compared dropout training to training of large ensembles and
concluded that dropout offers additional improvements to generalization error
beyond those obtained by ensembles of independent models.
It is important to understand that a large portion of the power of dropout
arises from the fact that the masking noise is applied to the hidden units. This
can be seen as a form of highly intelligent, adaptive destruction of the information
content of the input rather than destruction of the raw values of the input. For
example, if the model learns a hidden unit hi that detects a face by finding the nose,
then dropping hi corresponds to erasing the information that there is a nose in
the image. The model must learn another hi, either that redundantly encodes the
presence of a nose, or that detects the face by another feature, such as the mouth.
Traditional noise injection techniques that add unstructured noise at the input are
not able to randomly erase the information about a nose from an image of a face
unless the magnitude of the noise is so great that nearly all of the information in
the image is removed. Destroying extracted features rather than original values
allows the destruction process to make use of all of the knowledge about the input
distribution that the model has acquired so far.
Another important aspect of dropout is that the noise is multiplicative. If the
noise were additive with fixed scale, then a rectified linear hidden unit hi with
added noise  could simply learn to have hi become very large in order to make
the added noise  insignificant by comparison. Multiplicative noise does not allow
such a pathological solution to the noise robustness problem.
Another deep learning algorithm, batch normalization, reparametrizes the model
in a way that introduces both additive and multiplicative noise on the hidden
units at training time. The primary purpose of batch normalization is to improve
optimization, but the noise can have a regularizing effect, and sometimes makes
dropout unnecessary. Batch normalization is described further in section 8.7.1.
7.13 Adversarial Training
In many cases, neural networks have begun to reach human performance when
evaluated on an i.i.d. test set. It is natural therefore to wonder whether these
models have obtained a true human-level understanding of these tasks. In order
to probe the level of understanding a network has of the underlying task, we can
search for examples that the model misclassifies. Szegedy et al. (2014b) found that
even neural networks that perform at human level accuracy have a nearly 100%
error rate on examples that are intentionally constructed by using an optimization
procedure to search for an input x′ near a data point x such that the model
output is very different at x′. In many cases, x′ can be so similar to x that a
human observer cannot tell the difference between the original example and the
adversarial example, but the network can make highly different predictions. See
figure 7.8 for an example.
Adversarial examples have many implications, for example, in computer security,
that are beyond the scope of this chapter. However, they are interesting in the
context of regularization because one can reduce the error rate on the original i.i.d.
test set via adversarial training—training on adversarially perturbed examples
from the training set (Szegedy et al., 2014b; Goodfellow et al., 2014b).
Goodfellow et al. (2014b) showed that one of the primary causes of these
adversarial examples is excessive linearity. Neural networks are built out of
primarily linear building blocks. In some experiments the overall function they
implement proves to be highly linear as a result. These linear functions are easy
[Figure 7.8 panels: x, classified as “panda” with 57.7% confidence, plus .007 × sign(∇x J(θ, x, y)), classified as “nematode” with 8.2% confidence, yields x + ε sign(∇x J(θ, x, y)), classified as “gibbon” with 99.3% confidence.]
Figure 7.8: A demonstration of adversarial example generation applied to GoogLeNet
(Szegedy et al., 2014a) on ImageNet. By adding an imperceptibly small vector whose
elements are equal to the sign of the elements of the gradient of the cost function with
respect to the input, we can change GoogLeNet's classification of the image. Reproduced
with permission from Goodfellow et al. (2014b).
to optimize. Unfortunately, the value of a linear function can change very rapidly
if it has numerous inputs. If we change each input by ε, then a linear function
with weights w can change by as much as $\epsilon \|w\|_1$, which can be a very large
amount if w is high-dimensional. Adversarial training discourages this highly
sensitive locally linear behavior by encouraging the network to be locally constant
in the neighborhood of the training data. This can be seen as a way of explicitly
introducing a local constancy prior into supervised neural nets.
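To see how easily a nearly linear model is moved by such a perturbation, here is a toy sketch (a hypothetical softmax regression classifier, not the GoogLeNet experiment of figure 7.8): the input is perturbed by ε times the sign of the gradient of the cross-entropy loss with respect to the input, and the probability assigned to the true class drops.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W = rng.normal(size=(3, 8))               # weights of a toy linear classifier
b = np.zeros(3)
x = rng.normal(size=8)
y = 1                                     # index of the true class

p = softmax(W @ x + b)
one_hot = np.eye(3)[y]
# For cross-entropy loss J and logits z = Wx + b: dJ/dz = p - one_hot,
# so the gradient with respect to the input is dJ/dx = W^T (p - one_hot).
grad_x = W.T @ (p - one_hot)

epsilon = 0.007
x_adv = x + epsilon * np.sign(grad_x)     # fast gradient sign perturbation

print(softmax(W @ x + b)[y], softmax(W @ x_adv + b)[y])  # true-class probability drops
```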
Adversarial training helps to illustrate the power of using a large function
family in combination with aggressive regularization. Purely linear models, like
logistic regression, are not able to resist adversarial examples because they are
forced to be linear. Neural networks are able to represent functions that can range
from nearly linear to nearly locally constant and thus have the flexibility to capture
linear trends in the training data while still learning to resist local perturbation.
Adversarial examples also provide a means of accomplishing semi-supervised
learning. At a point x that is not associated with a label in the dataset, the
model itself assigns some label ŷ. The model’s label ŷ may not be the true label,
but if the model is high quality, then ŷ has a high probability of providing the
true label. We can seek an adversarial example x′ that causes the classifier to
output a label y′ with y′ ≠ ŷ. Adversarial examples generated using not the true
label but a label provided by a trained model are called virtual adversarial
examples (Miyato et al., 2015). The classifier may then be trained to assign the
same label to x and x′. This encourages the classifier to learn a function that is
robust to small changes anywhere along the manifold where the unlabeled data
lies. The assumption motivating this approach is that different classes usually lie
on disconnected manifolds, and a small perturbation should not be able to jump
from one class manifold to another class manifold.
7.14 Tangent Distance, Tangent Prop, and Manifold
Tangent Classifier
Many machine learning algorithms aim to overcome the curse of dimensionality
by assuming that the data lies near a low-dimensional manifold, as described in
section 5.11.3.
One of the early attempts to take advantage of the manifold hypothesis is the
tangent distance algorithm (Simard et al., 1993, 1998). It is a non-parametric
nearest-neighbor algorithm in which the metric used is not the generic Euclidean
distance but one that is derived from knowledge of the manifolds near which
probability concentrates. It is assumed that we are trying to classify examples and
that examples on the same manifold share the same category. Since the classifier
should be invariant to the local factors of variation that correspond to movement
on the manifold, it would make sense to use as nearest-neighbor distance between
points x1 and x2 the distance between the manifolds M1 and M2 to which they
respectively belong. Although that may be computationally difficult (it would
require solving an optimization problem, to find the nearest pair of points on M1
and M2), a cheap alternative that makes sense locally is to approximate Mi by its
tangent plane at xi and measure the distance between the two tangents, or between
a tangent plane and a point. That can be achieved by solving a low-dimensional
linear system (in the dimension of the manifolds). Of course, this algorithm requires
one to specify the tangent vectors.
In a related spirit, the tangent prop algorithm (Simard et al., 1992) (figure 7.9)
trains a neural net classifier with an extra penalty to make each output f(x) of
the neural net locally invariant to known factors of variation. These factors of
variation correspond to movement along the manifold near which examples of the
same class concentrate. Local invariance is achieved by requiring ∇x f(x) to be
orthogonal to the known manifold tangent vectors v^(i) at x, or equivalently that
the directional derivative of f at x in the directions v^(i) be small by adding a
regularization penalty Ω:
$$\Omega(f) = \sum_i \left( \left(\nabla_{\mathbf{x}} f(\mathbf{x})\right)^\top \mathbf{v}^{(i)} \right)^2. \tag{7.67}$$
This regularizer can of course be scaled by an appropriate hyperparameter, and, for
most neural networks, we would need to sum over many outputs rather than the lone
output f(x) described here for simplicity. As with the tangent distance algorithm,
the tangent vectors are derived a priori, usually from the formal knowledge of
the effect of transformations such as translation, rotation, and scaling in images.
Tangent prop has been used not just for supervised learning (Simard et al., 1992)
but also in the context of reinforcement learning (Thrun, 1995).
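A small numpy sketch of the penalty in equation 7.67, using a hypothetical one-hidden-layer tanh network with a scalar output and a single assumed tangent direction: the input gradient is obtained by the chain rule and its squared projection onto each tangent vector is summed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny scalar-output network f(x) = w2 . tanh(W1 x + b1), with made-up shapes.
W1 = rng.normal(size=(4, 2))
b1 = np.zeros(4)
w2 = rng.normal(size=4)

def grad_f_wrt_x(x):
    a = np.tanh(W1 @ x + b1)
    return W1.T @ (w2 * (1.0 - a ** 2))   # df/dx by the chain rule

x = rng.normal(size=2)
tangents = [np.array([1.0, 0.0])]         # assumed known tangent direction(s) at x

# Tangent prop penalty (eq. 7.67): sum of squared directional derivatives of f
# along the manifold tangent vectors.
omega = sum((grad_f_wrt_x(x) @ t) ** 2 for t in tangents)
print(omega)
```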
Tangent propagation is closely related to dataset augmentation. In both
cases, the user of the algorithm encodes his or her prior knowledge of the task
by specifying a set of transformations that should not alter the output of the
network. The difference is that in the case of dataset augmentation, the network is
explicitly trained to correctly classify distinct inputs that were created by applying
more than an infinitesimal amount of these transformations. Tangent propagation
does not require explicitly visiting a new input point. Instead, it analytically
regularizes the model to resist perturbation in the directions corresponding to
the specified transformation. While this analytical approach is intellectually
elegant, it has two major drawbacks. First, it only regularizes the model to resist
infinitesimal perturbation. Explicit dataset augmentation confers resistance to
larger perturbations. Second, the infinitesimal approach poses difficulties for models
based on rectified linear units. These models can only shrink their derivatives
by turning units off or shrinking their weights. They are not able to shrink their
derivatives by saturating at a high value with large weights, as sigmoid or tanh
units can. Dataset augmentation works well with rectified linear units because
different subsets of rectified units can activate for different transformed versions of
each original input.
Tangent propagation is also related to double backprop (Drucker and LeCun,
1992) and adversarial training (Szegedy et al., 2014b; Goodfellow et al., 2014b).
Double backprop regularizes the Jacobian to be small, while adversarial training
finds inputs near the original inputs and trains the model to produce the same
output on these as on the original inputs. Tangent propagation and dataset
augmentation using manually specified transformations both require that the
model should be invariant to certain specified directions of change in the input.
Double backprop and adversarial training both require that the model should be
invariant to all directions of change in the input so long as the change is small. Just
as dataset augmentation is the non-infinitesimal version of tangent propagation,
adversarial training is the non-infinitesimal version of double backprop.
The manifold tangent classifier (Rifai et al., 2011c) eliminates the need to
know the tangent vectors a priori. As we will see in chapter 14, autoencoders can
[Figure 7.9 plot: a curved one-dimensional class manifold in the (x1, x2) plane, with a normal vector and a tangent vector drawn at one point.]
Figure 7.9: Illustration of the main idea of the tangent prop algorithm (Simard et al.,
1992) and manifold tangent classifier (Rifai et al., 2011c), which both regularize the
classifier output function f(x). Each curve represents the manifold for a different class,
illustrated here as a one-dimensional manifold embedded in a two-dimensional space.
On one curve, we have chosen a single point and drawn a vector that is tangent to the
class manifold (parallel to and touching the manifold) and a vector that is normal to the
class manifold (orthogonal to the manifold). In multiple dimensions there may be many
tangent directions and many normal directions. We expect the classification function to
change rapidly as it moves in the direction normal to the manifold, and not to change as
it moves along the class manifold. Both tangent propagation and the manifold tangent
classifier regularize f(x) to not change very much as x moves along the manifold. Tangent
propagation requires the user to manually specify functions that compute the tangent
directions (such as specifying that small translations of images remain in the same class
manifold) while the manifold tangent classifier estimates the manifold tangent directions
by training an autoencoder to fit the training data. The use of autoencoders to estimate
manifolds will be described in chapter 14.
estimate the manifold tangent vectors. The manifold tangent classifier makes use
of this technique to avoid needing user-specified tangent vectors. As illustrated
in figure 14.10, these estimated tangent vectors go beyond the classical invariants
that arise out of the geometry of images (such as translation, rotation and scaling)
and include factors that must be learned because they are object-specific (such as
moving body parts). The algorithm proposed with the manifold tangent classifier
is therefore simple: (1) use an autoencoder to learn the manifold structure by
unsupervised learning, and (2) use these tangents to regularize a neural net classifier
as in tangent prop (equation 7.67).
This chapter has described most of the general strategies used to regularize
neural networks. Regularization is a central theme of machine learning and as such
will be revisited periodically by most of the remaining chapters. Another central
theme of machine learning is optimization, described next.
Chapter 8
Optimization for Training Deep
Models
Deep learning algorithms involve optimization in many contexts. For example,
performing inference in models such as PCA involves solving an optimization
problem. We often use analytical optimization to write proofs or design algorithms.
Of all of the many optimization problems involved in deep learning, the most
difficult is neural network training. It is quite common to invest days to months of
time on hundreds of machines in order to solve even a single instance of the neural
network training problem. Because this problem is so important and so expensive,
a specialized set of optimization techniques have been developed for solving it.
This chapter presents these optimization techniques for neural network training.
If you are unfamiliar with the basic principles of gradient-based optimization,
we suggest reviewing chapter 4. That chapter includes a brief overview of numerical
optimization in general.
This chapter focuses on one particular case of optimization: finding the param-
eters θ of a neural network that significantly reduce a cost function J(θ), which
typically includes a performance measure evaluated on the entire training set as
well as additional regularization terms.
We begin with a description of how optimization used as a training algorithm
for a machine learning task differs from pure optimization. Next, we present several
of the concrete challenges that make optimization of neural networks difficult. We
then define several practical algorithms, including both optimization algorithms
themselves and strategies for initializing the parameters. More advanced algorithms
adapt their learning rates during training or leverage information contained in
the second derivatives of the cost function. Finally, we conclude with a review of
several optimization strategies that are formed by combining simple optimization
algorithms into higher-level procedures.
8.1 How Learning Differs from Pure Optimization
Optimization algorithms used for training of deep models differ from traditional
optimization algorithms in several ways. Machine learning usually acts indirectly.
In most machine learning scenarios, we care about some performance measure
P, that is defined with respect to the test set and may also be intractable. We
therefore optimize P only indirectly. We reduce a different cost function J(θ) in
the hope that doing so will improve P. This is in contrast to pure optimization,
where minimizing J is a goal in and of itself. Optimization algorithms for training
deep models also typically include some specialization on the specific structure of
machine learning objective functions.
Typically, the cost function can be written as an average over the training set,
such as
$$J(\theta) = \mathbb{E}_{(\mathbf{x}, y) \sim \hat{p}_{\text{data}}} L(f(\mathbf{x}; \theta), y), \tag{8.1}$$
where L is the per-example loss function, f(x; θ) is the predicted output when
the input is x, p̂data is the empirical distribution. In the supervised learning case,
y is the target output. Throughout this chapter, we develop the unregularized
supervised case, where the arguments to L are f(x; θ) and y. However, it is trivial
to extend this development, for example, to include θ or x as arguments, or to
exclude y as arguments, in order to develop various forms of regularization or
unsupervised learning.
Equation 8.1 defines an objective function with respect to the training set. We
would usually prefer to minimize the corresponding objective function where the
expectation is taken across the data generating distribution pdata rather than just
over the finite training set:
$$J^*(\theta) = \mathbb{E}_{(\mathbf{x}, y) \sim p_{\text{data}}} L(f(\mathbf{x}; \theta), y). \tag{8.2}$$
8.1.1 Empirical Risk Minimization
The goal of a machine learning algorithm is to reduce the expected generalization
error given by equation 8.2. This quantity is known as the risk. We emphasize here
that the expectation is taken over the true underlying distribution pdata. If we knew
the true distribution pdata(x, y), risk minimization would be an optimization task
solvable by an optimization algorithm. However, when we do not know pdata(x, y)
but only have a training set of samples, we have a machine learning problem.
The simplest way to convert a machine learning problem back into an op-
timization problem is to minimize the expected loss on the training set. This
means replacing the true distribution p(x, y) with the empirical distribution p̂(x, y)
defined by the training set. We now minimize the empirical risk
$$\mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{\text{data}}(\mathbf{x}, y)}\left[L(f(\mathbf{x}; \theta), y)\right] = \frac{1}{m} \sum_{i=1}^{m} L\left(f(\mathbf{x}^{(i)}; \theta), y^{(i)}\right), \tag{8.3}$$
where m is the number of training examples.
The training process based on minimizing this average training error is known
as empirical risk minimization. In this setting, machine learning is still very
similar to straightforward optimization. Rather than optimizing the risk directly,
we optimize the empirical risk, and hope that the risk decreases significantly as
well. A variety of theoretical results establish conditions under which the true risk
can be expected to decrease by various amounts.
However, empirical risk minimization is prone to overfitting. Models with
high capacity can simply memorize the training set. In many cases, empirical
risk minimization is not really feasible. The most effective modern optimization
algorithms are based on gradient descent, but many useful loss functions, such
as 0-1 loss, have no useful derivatives (the derivative is either zero or undefined
everywhere). These two problems mean that, in the context of deep learning, we
rarely use empirical risk minimization. Instead, we must use a slightly different
approach, in which the quantity that we actually optimize is even more different
from the quantity that we truly want to optimize.
8.1.2 Surrogate Loss Functions and Early Stopping
Sometimes, the loss function we actually care about (say classification error) is not
one that can be optimized efficiently. For example, exactly minimizing expected 0-1
loss is typically intractable (exponential in the input dimension), even for a linear
classifier (Marcotte and Savard, 1992). In such situations, one typically optimizes
a surrogate loss function instead, which acts as a proxy but has advantages.
For example, the negative log-likelihood of the correct class is typically used as a
surrogate for the 0-1 loss. The negative log-likelihood allows the model to estimate
the conditional probability of the classes, given the input, and if the model can
do that well, then it can pick the classes that yield the least classification error in
expectation.
In some cases, a surrogate loss function actually results in being able to learn
more. For example, the test set 0-1 loss often continues to decrease for a long
time after the training set 0-1 loss has reached zero, when training using the
log-likelihood surrogate. This is because even when the expected 0-1 loss is zero,
one can improve the robustness of the classifier by further pushing the classes apart
from each other, obtaining a more confident and reliable classifier, thus extracting
more information from the training data than would have been possible by simply
minimizing the average 0-1 loss on the training set.
A very important difference between optimization in general and optimization
as we use it for training algorithms is that training algorithms do not usually halt
at a local minimum. Instead, a machine learning algorithm usually minimizes
a surrogate loss function but halts when a convergence criterion based on early
stopping (section 7.8) is satisfied. Typically the early stopping criterion is based
on the true underlying loss function, such as 0-1 loss measured on a validation set,
and is designed to cause the algorithm to halt whenever overfitting begins to occur.
Training often halts while the surrogate loss function still has large derivatives,
which is very different from the pure optimization setting, where an optimization
algorithm is considered to have converged when the gradient becomes very small.
8.1.3 Batch and Minibatch Algorithms
One aspect of machine learning algorithms that separates them from general
optimization algorithms is that the objective function usually decomposes as a sum
over the training examples. Optimization algorithms for machine learning typically
compute each update to the parameters based on an expected value of the cost
function estimated using only a subset of the terms of the full cost function.
For example, maximum likelihood estimation problems, when viewed in log
space, decompose into a sum over each example:
$$\theta_{\text{ML}} = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\text{model}}\left(\mathbf{x}^{(i)}, y^{(i)}; \theta\right). \tag{8.4}$$
Maximizing this sum is equivalent to maximizing the expectation over the
empirical distribution defined by the training set:
$$J(\theta) = \mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{\text{data}}} \log p_{\text{model}}(\mathbf{x}, y; \theta). \tag{8.5}$$
Most of the properties of the objective function J used by most of our opti-
mization algorithms are also expectations over the training set. For example, the
most commonly used property is the gradient:
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{\mathbf{x}, y \sim \hat{p}_{\text{data}}} \nabla_{\theta} \log p_{\text{model}}(\mathbf{x}, y; \theta). \tag{8.6}$$
Computing this expectation exactly is very expensive because it requires
evaluating the model on every example in the entire dataset. In practice, we can
compute these expectations by randomly sampling a small number of examples
from the dataset, then taking the average over only those examples.
Recall that the standard error of the mean (equation 5.46) estimated from n
samples is given by $\sigma/\sqrt{n}$, where σ is the true standard deviation of the value of
the samples. The denominator of $\sqrt{n}$ shows that there are less than linear returns
to using more examples to estimate the gradient. Compare two hypothetical
estimates of the gradient, one based on 100 examples and another based on 10,000
examples. The latter requires 100 times more computation than the former, but
reduces the standard error of the mean only by a factor of 10. Most optimization
algorithms converge much faster (in terms of total computation, not in terms of
number of updates) if they are allowed to rapidly compute approximate estimates
of the gradient rather than slowly computing the exact gradient.
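The comparison is just the σ/√n formula applied twice, as in the short sketch below (σ is an assumed placeholder value):

```python
import numpy as np

sigma = 1.0  # assumed true standard deviation of a single-example gradient estimate
for n in (100, 10_000):
    print(n, sigma / np.sqrt(n))   # standard error of the mean: sigma / sqrt(n)
# 100 times more examples (and computation) shrinks the error only by a factor of 10.
```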
Another consideration motivating statistical estimation of the gradient from a
small number of samples is redundancy in the training set. In the worst case, all
m samples in the training set could be identical copies of each other. A sampling-
based estimate of the gradient could compute the correct gradient with a single
sample, using m times less computation than the naive approach. In practice, we
are unlikely to truly encounter this worst-case situation, but we may find large
numbers of examples that all make very similar contributions to the gradient.
Optimization algorithms that use the entire training set are called batch or
deterministic gradient methods, because they process all of the training examples
simultaneously in a large batch. This terminology can be somewhat confusing
because the word “batch” is also often used to describe the minibatch used by
minibatch stochastic gradient descent. Typically the term “batch gradient descent”
implies the use of the full training set, while the use of the term “batch” to describe
a group of examples does not. For example, it is very common to use the term
“batch size” to describe the size of a minibatch.
Optimization algorithms that use only a single example at a time are sometimes
called stochastic or sometimes online methods. The term online is usually
reserved for the case where the examples are drawn from a stream of continually
created examples rather than from a fixed-size training set over which several
passes are made.
Most algorithms used for deep learning fall somewhere in between, using more
than one but less than all of the training examples. These were traditionally called
minibatch or minibatch stochastic methods and it is now common to simply
call them stochastic methods.
The canonical example of a stochastic method is stochastic gradient descent,
presented in detail in section 8.3.1.
Minibatch sizes are generally driven by the following factors:
• Larger batches provide a more accurate estimate of the gradient, but with
less than linear returns.
• Multicore architectures are usually underutilized by extremely small batches.
This motivates using some absolute minimum batch size, below which there
is no reduction in the time to process a minibatch.
• If all examples in the batch are to be processed in parallel (as is typically
the case), then the amount of memory scales with the batch size. For many
hardware setups this is the limiting factor in batch size.
• Some kinds of hardware achieve better runtime with specific sizes of arrays.
Especially when using GPUs, it is common for power of 2 batch sizes to offer
better runtime. Typical power of 2 batch sizes range from 32 to 256, with 16
sometimes being attempted for large models.
• Small batches can offer a regularizing effect (Wilson and Martinez, 2003),
perhaps due to the noise they add to the learning process. Generalization
error is often best for a batch size of 1. Training with such a small batch
size might require a small learning rate to maintain stability due to the high
variance in the estimate of the gradient. The total runtime can be very high
due to the need to make more steps, both because of the reduced learning
rate and because it takes more steps to observe the entire training set.
Different kinds of algorithms use different kinds of information from the mini-
batch in different ways. Some algorithms are more sensitive to sampling error than
others, either because they use information that is difficult to estimate accurately
with few samples, or because they use information in ways that amplify sampling
errors more. Methods that compute updates based only on the gradient g are
usually relatively robust and can handle smaller batch sizes like 100. Second-order
methods, which use also the Hessian matrix H and compute updates such as
H−1g, typically require much larger batch sizes like 10,000. These large batch
sizes are required to minimize fluctuations in the estimates of H−1
g. Suppose
that H is estimated perfectly but has a poor condition number. Multiplication by
H or its inverse amplifies pre-existing errors, in this case, estimation errors in g.
Very small changes in the estimate of g can thus cause large changes in the update
H−1
g, even if H were estimated perfectly. Of course, H will be estimated only
approximately, so the update H−1
g will contain even more error than we would
predict from applying a poorly conditioned operation to the estimate of g.
It is also crucial that the minibatches be selected randomly. Computing an
unbiased estimate of the expected gradient from a set of samples requires that those
samples be independent. We also wish for two subsequent gradient estimates to be
independent from each other, so two subsequent minibatches of examples should
also be independent from each other. Many datasets are most naturally arranged
in a way where successive examples are highly correlated. For example, we might
have a dataset of medical data with a long list of blood sample test results. This
list might be arranged so that first we have five blood samples taken at different
times from the first patient, then we have three blood samples taken from the
second patient, then the blood samples from the third patient, and so on. If we
were to draw examples in order from this list, then each of our minibatches would
be extremely biased, because it would represent primarily one patient out of the
many patients in the dataset. In cases such as these where the order of the dataset
holds some significance, it is necessary to shuffle the examples before selecting
minibatches. For very large datasets, for example datasets containing billions of
examples in a data center, it can be impractical to sample examples truly uniformly
at random every time we want to construct a minibatch. Fortunately, in practice
it is usually sufficient to shuffle the order of the dataset once and then store it in
shuffled fashion. This will impose a fixed set of possible minibatches of consecutive
examples that all models trained thereafter will use, and each individual model
will be forced to reuse this ordering every time it passes through the training
data. However, this deviation from true random selection does not seem to have a
significant detrimental effect. Failing to ever shuffle the examples in any way can
seriously reduce the effectiveness of the algorithm.
Many optimization problems in machine learning decompose over examples
well enough that we can compute entire separate updates over different examples
in parallel. In other words, we can compute the update that minimizes J(X) for
one minibatch of examples X at the same time that we compute the update for
several other minibatches. Such asynchronous parallel distributed approaches are
discussed further in section 12.1.3.
An interesting motivation for minibatch stochastic gradient descent is that it
follows the gradient of the true generalization error (equation 8.2) so long as no
examples are repeated. Most implementations of minibatch stochastic gradient
descent shuffle the dataset once and then pass through it multiple times. On the
first pass, each minibatch is used to compute an unbiased estimate of the true
generalization error. On the second pass, the estimate becomes biased because it is
formed by re-sampling values that have already been used, rather than obtaining
new fair samples from the data generating distribution.
The fact that stochastic gradient descent minimizes generalization error is
easiest to see in the online learning case, where examples or minibatches are drawn
from a stream of data. In other words, instead of receiving a fixed-size training
set, the learner is similar to a living being who sees a new example at each instant,
with every example (x, y) coming from the data generating distribution pdata(x, y).
In this scenario, examples are never repeated; every experience is a fair sample
from pdata.
The equivalence is easiest to derive when both x and y are discrete. In this
case, the generalization error (equation 8.2) can be written as a sum

J^*(θ) = Σ_x Σ_y p_data(x, y) L(f(x; θ), y),    (8.7)

with the exact gradient

g = ∇_θ J^*(θ) = Σ_x Σ_y p_data(x, y) ∇_θ L(f(x; θ), y).    (8.8)

We have already seen the same fact demonstrated for the log-likelihood in equation 8.5
and equation 8.6; we observe now that this holds for other functions L besides the
likelihood. A similar result can be derived when x and y are continuous, under mild
assumptions regarding p_data and L.
Hence, we can obtain an unbiased estimator of the exact gradient of the
generalization error by sampling a minibatch of examples {x^(1), . . . , x^(m)} with
corresponding targets y^(i) from the data generating distribution p_data, and computing
the gradient of the loss with respect to the parameters for that minibatch:

ĝ = (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i)).    (8.9)

Updating θ in the direction of ĝ performs SGD on the generalization error.
Of course, this interpretation only applies when examples are not reused.
Nonetheless, it is usually best to make several passes through the training set,
unless the training set is extremely large. When multiple such epochs are used,
only the first epoch follows the unbiased gradient of the generalization error, but
of course, the additional epochs usually provide enough benefit due to decreased
training error to offset the harm they cause by increasing the gap between training
error and test error.
With some datasets growing rapidly in size, faster than computing power, it
is becoming more common for machine learning applications to use each training
example only once or even to make an incomplete pass through the training
set. When using an extremely large training set, overfitting is not an issue, so
underfitting and computational efficiency become the predominant concerns. See
also Bottou and Bousquet (2008) for a discussion of the effect of computational
bottlenecks on generalization error, as the number of training examples grows.
8.2 Challenges in Neural Network Optimization
Optimization in general is an extremely difficult task. Traditionally, machine
learning has avoided the difficulty of general optimization by carefully designing
the objective function and constraints to ensure that the optimization problem is
convex. When training neural networks, we must confront the general non-convex
case. Even convex optimization is not without its complications. In this section,
we summarize several of the most prominent challenges involved in optimization
for training deep models.
8.2.1 Ill-Conditioning
Some challenges arise even when optimizing convex functions. Of these, the most
prominent is ill-conditioning of the Hessian matrix H. This is a very general
problem in most numerical optimization, convex or otherwise, and is described in
more detail in section 4.3.1.
The ill-conditioning problem is generally believed to be present in neural
network training problems. Ill-conditioning can manifest by causing SGD to get
“stuck” in the sense that even very small steps increase the cost function.
Recall from equation 4.9 that a second-order Taylor series expansion of the
cost function predicts that a gradient descent step of −εg will add

(1/2) ε^2 g^T H g − ε g^T g    (8.10)

to the cost. Ill-conditioning of the gradient becomes a problem when (1/2) ε^2 g^T H g
exceeds ε g^T g. To determine whether ill-conditioning is detrimental to a neural
network training task, one can monitor the squared gradient norm g^T g and
[Figure 8.1: two panels plotting gradient norm (left) and classification error rate (right) against training time in epochs.]
Figure 8.1: Gradient descent often does not arrive at a critical point of any kind. In this
example, the gradient norm increases throughout training of a convolutional network used
for object detection. (Left) A scatterplot showing how the norms of individual gradient
evaluations are distributed over time. To improve legibility, only one gradient norm
is plotted per epoch. The running average of all gradient norms is plotted as a solid
curve. The gradient norm clearly increases over time, rather than decreasing as we would
expect if the training process converged to a critical point. (Right) Despite the increasing
gradient, the training process is reasonably successful. The validation set classification
error decreases to a low level.
the gHg term. In many cases, the gradient norm does not shrink significantly
throughout learning, but the gHg term grows by more than an order of magnitude.
The result is that learning becomes very slow despite the presence of a strong
gradient because the learning rate must be shrunk to compensate for even stronger
curvature. Figure shows an example of the gradient increasing significantly
8.1
during the successful training of a neural network.
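This kind of monitoring does not require forming H explicitly, because g^T H g only needs a Hessian-vector product, which can be approximated by a finite difference of the gradient. The sketch below is an illustrative NumPy example (the function grad_fn is a user-supplied gradient routine, an assumption rather than anything defined in the text):

    import numpy as np

    def curvature_diagnostics(grad_fn, theta, eps=1e-4):
        """Return (g^T g, g^T H g) for the cost whose gradient is grad_fn(theta).

        The Hessian-vector product H d is approximated by a central finite
        difference of the gradient along d = g / ||g||, so H is never formed."""
        g = grad_fn(theta)
        g_norm_sq = float(g @ g)
        d = g / (np.linalg.norm(g) + 1e-12)
        Hd = (grad_fn(theta + eps * d) - grad_fn(theta - eps * d)) / (2 * eps)
        gHg = g_norm_sq * float(d @ Hd)          # g^T H g = ||g||^2 * d^T H d
        return g_norm_sq, gHg

    # Toy quadratic cost 0.5 * theta^T A theta with a poorly conditioned A.
    A = np.diag([1.0, 100.0])
    grad_fn = lambda theta: A @ theta
    print(curvature_diagnostics(grad_fn, np.array([1.0, 1.0])))

If g^T g stays large while the g^T H g term keeps growing, the slowdown is due to curvature rather than to the gradient vanishing at a critical point.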
Though ill-conditioning is present in other settings besides neural network
training, some of the techniques used to combat it in other contexts are less
applicable to neural networks. For example, Newton’s method is an excellent tool
for minimizing convex functions with poorly conditioned Hessian matrices, but in
the subsequent sections we will argue that Newton’s method requires significant
modification before it can be applied to neural networks.
8.2.2 Local Minima
One of the most prominent features of a convex optimization problem is that it
can be reduced to the problem of finding a local minimum. Any local minimum is
guaranteed to be a global minimum. Some convex functions have a flat region at
the bottom rather than a single global minimum point, but any point within such
a flat region is an acceptable solution. When optimizing a convex function, we
know that we have reached a good solution if we find a critical point of any kind.
With non-convex functions, such as neural nets, it is possible to have many
local minima. Indeed, nearly any deep model is essentially guaranteed to have
an extremely large number of local minima. However, as we will see, this is not
necessarily a major problem.
Neural networks and any models with multiple equivalently parametrized latent
variables all have multiple local minima because of the model identifiability
problem. A model is said to be identifiable if a sufficiently large training set can
rule out all but one setting of the model’s parameters. Models with latent variables
are often not identifiable because we can obtain equivalent models by exchanging
latent variables with each other. For example, we could take a neural network and
modify layer 1 by swapping the incoming weight vector for unit i with the incoming
weight vector for unit j, then doing the same for the outgoing weight vectors. If we
have m layers with n units each, then there are n!^m ways of arranging the hidden
units. This kind of non-identifiability is known as weight space symmetry.
In addition to weight space symmetry, many kinds of neural networks have
additional causes of non-identifiability. For example, in any rectified linear or
maxout network, we can scale all of the incoming weights and biases of a unit by
α if we also scale all of its outgoing weights by 1/α. This means that—if the cost
function does not include terms such as weight decay that depend directly on the
weights rather than the models’ outputs—every local minimum of a rectified linear
or maxout network lies on an (m × n)-dimensional hyperbola of equivalent local
minima.
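This scaling symmetry is easy to verify numerically. In the following small sketch (illustrative code, not from the original text), the incoming weights and bias of a ReLU hidden layer are scaled by α and its outgoing weights by 1/α, and the network output is unchanged:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((3, 5)), rng.standard_normal(3)   # hidden layer: 5 inputs, 3 units
    W2 = rng.standard_normal((2, 3))                               # output layer
    x = rng.standard_normal(5)

    def forward(W1, b1, W2, x):
        h = np.maximum(0, W1 @ x + b1)   # rectified linear hidden units
        return W2 @ h

    alpha = 7.3
    out_original = forward(W1, b1, W2, x)
    out_rescaled = forward(alpha * W1, alpha * b1, W2 / alpha, x)   # scale in by alpha, out by 1/alpha
    assert np.allclose(out_original, out_rescaled)                  # same function, different parameters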
These model identifiability issues mean that there can be an extremely large
or even uncountably infinite amount of local minima in a neural network cost
function. However, all of these local minima arising from non-identifiability are
equivalent to each other in cost function value. As a result, these local minima are
not a problematic form of non-convexity.
Local minima can be problematic if they have high cost in comparison to the
global minimum. One can construct small neural networks, even without hidden
units, that have local minima with higher cost than the global minimum (Sontag
and Sussman, 1989; Brady et al., 1989; Gori and Tesi, 1992). If local minima
with high cost are common, this could pose a serious problem for gradient-based
optimization algorithms.
It remains an open question whether there are many local minima of high cost
for networks of practical interest and whether optimization algorithms encounter
them. For many years, most practitioners believed that local minima were a
common problem plaguing neural network optimization. Today, that does not
appear to be the case. The problem remains an active area of research, but experts
now suspect that, for sufficiently large neural networks, most local minima have a
low cost function value, and that it is not important to find a true global minimum
rather than to find a point in parameter space that has low but not minimal cost
(Saxe et al., 2013; Dauphin et al., 2014; Goodfellow et al., 2015; Choromanska
et al., 2014).
Many practitioners attribute nearly all difficulty with neural network optimiza-
tion to local minima. We encourage practitioners to carefully test for specific
problems. A test that can rule out local minima as the problem is to plot the
norm of the gradient over time. If the norm of the gradient does not shrink to
insignificant size, the problem is neither local minima nor any other kind of critical
point. This kind of negative test can rule out local minima. In high dimensional
spaces, it can be very difficult to positively establish that local minima are the
problem. Many structures other than local minima also have small gradients.
8.2.3 Plateaus, Saddle Points and Other Flat Regions
For many high-dimensional non-convex functions, local minima (and maxima)
are in fact rare compared to another kind of point with zero gradient: a saddle
point. Some points around a saddle point have greater cost than the saddle point,
while others have a lower cost. At a saddle point, the Hessian matrix has both
positive and negative eigenvalues. Points lying along eigenvectors associated with
positive eigenvalues have greater cost than the saddle point, while points lying
along negative eigenvalues have lower value. We can think of a saddle point as
being a local minimum along one cross-section of the cost function and a local
maximum along another cross-section. See figure 4.5 for an illustration.
Many classes of random functions exhibit the following behavior: in low-
dimensional spaces, local minima are common. In higher dimensional spaces, local
minima are rare and saddle points are more common. For a function f : Rn → R of
this type, the expected ratio of the number of saddle points to local minima grows
exponentially with n. To understand the intuition behind this behavior, observe
that the Hessian matrix at a local minimum has only positive eigenvalues. The
Hessian matrix at a saddle point has a mixture of positive and negative eigenvalues.
Imagine that the sign of each eigenvalue is generated by flipping a coin. In a single
dimension, it is easy to obtain a local minimum by tossing a coin and getting heads
once. In n-dimensional space, it is exponentially unlikely that all n coin tosses will
be heads. See Dauphin et al. (2014) for a review of the relevant theoretical work.
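The coin-flipping intuition can be probed with a small numerical experiment. The sketch below is illustrative only (random Gaussian symmetric matrices are a stand-in for real Hessians, and their eigenvalue signs are not truly independent coins); it measures how often all eigenvalues of a random symmetric matrix are positive as the dimension grows:

    import numpy as np

    def fraction_all_positive(n, trials=2000, seed=0):
        """Fraction of random symmetric n x n matrices whose eigenvalues are all
        positive, i.e. the fraction of local-minimum-like sign patterns."""
        rng = np.random.default_rng(seed)
        count = 0
        for _ in range(trials):
            A = rng.standard_normal((n, n))
            H = (A + A.T) / 2                       # a random symmetric "Hessian"
            if np.all(np.linalg.eigvalsh(H) > 0):
                count += 1
        return count / trials

    for n in (1, 2, 4, 8):
        print(n, fraction_all_positive(n))          # shrinks rapidly as the dimension grows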
An amazing property of many random functions is that the eigenvalues of the
Hessian become more likely to be positive as we reach regions of lower cost. In
our coin tossing analogy, this means we are more likely to have our coin come up
heads n times if we are at a critical point with low cost. This means that local
minima are much more likely to have low cost than high cost. Critical points with
high cost are far more likely to be saddle points. Critical points with extremely
high cost are more likely to be local maxima.
This happens for many classes of random functions. Does it happen for neural
networks? Baldi and Hornik (1989) showed theoretically that shallow autoencoders
(feedforward networks trained to copy their input to their output, described in
chapter 14) with no nonlinearities have global minima and saddle points but no
local minima with higher cost than the global minimum. They observed without
proof that these results extend to deeper networks without nonlinearities. The
output of such networks is a linear function of their input, but they are useful
to study as a model of nonlinear neural networks because their loss function is
a non-convex function of their parameters. Such networks are essentially just
multiple matrices composed together. Saxe et al. (2013) provided exact solutions
to the complete learning dynamics in such networks and showed that learning in
these models captures many of the qualitative features observed in the training of
deep models with nonlinear activation functions. Dauphin et al. (2014) showed
experimentally that real neural networks also have loss functions that contain very
many high-cost saddle points. Choromanska et al. (2014) provided additional
theoretical arguments, showing that another class of high-dimensional random
functions related to neural networks does so as well.
What are the implications of the proliferation of saddle points for training algo-
rithms? For first-order optimization algorithms that use only gradient information,
the situation is unclear. The gradient can often become very small near a saddle
point. On the other hand, gradient descent empirically seems to be able to escape
saddle points in many cases. Goodfellow et al. (2015) provided visualizations of
several learning trajectories of state-of-the-art neural networks, with an example
given in figure 8.2. These visualizations show a flattening of the cost function near
a prominent saddle point where the weights are all zero, but they also show the
gradient descent trajectory rapidly escaping this region. Goodfellow et al. (2015)
also argue that continuous-time gradient descent may be shown analytically to be
repelled from, rather than attracted to, a nearby saddle point, but the situation
may be different for more realistic uses of gradient descent.
For Newton’s method, it is clear that saddle points constitute a problem.
[Figure 8.2: a surface plot of J(θ) over two projections of θ.]
Figure 8.2: A visualization of the cost function of a neural network. Image adapted
with permission from Goodfellow et al. (2015). These visualizations appear similar for
feedforward neural networks, convolutional networks, and recurrent networks applied
to real object recognition and natural language processing tasks. Surprisingly, these
visualizations usually do not show many conspicuous obstacles. Prior to the success of
stochastic gradient descent for training very large models beginning in roughly 2012,
neural net cost function surfaces were generally believed to have much more non-convex
structure than is revealed by these projections. The primary obstacle revealed by this
projection is a saddle point of high cost near where the parameters are initialized, but, as
indicated by the blue path, the SGD training trajectory escapes this saddle point readily.
Most of training time is spent traversing the relatively flat valley of the cost function,
which may be due to high noise in the gradient, poor conditioning of the Hessian matrix
in this region, or simply the need to circumnavigate the tall “mountain” visible in the
figure via an indirect arcing path.
Gradient descent is designed to move “downhill” and is not explicitly designed
to seek a critical point. Newton’s method, however, is designed to solve for a
point where the gradient is zero. Without appropriate modification, it can jump
to a saddle point. The proliferation of saddle points in high dimensional spaces
presumably explains why second-order methods have not succeeded in replacing
gradient descent for neural network training. Dauphin et al. (2014) introduced a
saddle-free Newton method for second-order optimization and showed that it
improves significantly over the traditional version. Second-order methods remain
difficult to scale to large neural networks, but this saddle-free approach holds
promise if it could be scaled.
There are other kinds of points with zero gradient besides minima and saddle
points. There are also maxima, which are much like saddle points from the
perspective of optimization—many algorithms are not attracted to them, but
unmodified Newton’s method is. Maxima of many classes of random functions
become exponentially rare in high dimensional space, just like minima do.
There may also be wide, flat regions of constant value. In these locations, the
gradient and also the Hessian are all zero. Such degenerate locations pose major
problems for all numerical optimization algorithms. In a convex problem, a wide,
flat region must consist entirely of global minima, but in a general optimization
problem, such a region could correspond to a high value of the objective function.
8.2.4 Cliffs and Exploding Gradients
Neural networks with many layers often have extremely steep regions resembling
cliffs, as illustrated in figure 8.3. These result from the multiplication of several
large weights together. On the face of an extremely steep cliff structure, the
gradient update step can move the parameters extremely far, usually jumping off
of the cliff structure altogether.
[Figure 8.3: a surface plot of an objective function containing a sharp cliff.]
Figure 8.3: The objective function for highly nonlinear deep neural networks or for
recurrent neural networks often contains sharp nonlinearities in parameter space resulting
from the multiplication of several parameters. These nonlinearities give rise to very
high derivatives in some places. When the parameters get close to such a cliff region, a
gradient descent update can catapult the parameters very far, possibly losing most of the
optimization work that had been done. Figure adapted with permission from Pascanu
et al. (2013).
The cliff can be dangerous whether we approach it from above or from below,
but fortunately its most serious consequences can be avoided using the gradient
clipping heuristic described in section 10.11.1. The basic idea is to recall that
the gradient does not specify the optimal step size, but only the optimal direction
within an infinitesimal region. When the traditional gradient descent algorithm
proposes to make a very large step, the gradient clipping heuristic intervenes to
reduce the step size to be small enough that it is less likely to go outside the region
where the gradient indicates the direction of approximately steepest descent. Cliff
structures are most common in the cost functions for recurrent neural networks,
because such models involve a multiplication of many factors, with one factor
for each time step. Long temporal sequences thus incur an extreme amount of
multiplication.
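A minimal form of norm-based gradient clipping might look like the following sketch (illustrative code; the threshold value is arbitrary, and section 10.11.1 discusses the heuristic properly):

    import numpy as np

    def clip_gradient(g, threshold):
        """Rescale g so that its norm never exceeds threshold, preserving its direction."""
        norm = np.linalg.norm(g)
        if norm > threshold:
            g = g * (threshold / norm)
        return g

    g = np.array([300.0, -400.0])            # an exploding gradient near a cliff
    print(clip_gradient(g, threshold=5.0))   # same direction, norm capped at 5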
8.2.5 Long-Term Dependencies
Another difficulty that neural network optimization algorithms must overcome
arises when the computational graph becomes extremely deep. Feedforward
networks with many layers have such deep computational graphs. So do recurrent
networks, described in chapter 10, which construct very deep computational graphs
by repeatedly applying the same operation at each time step of a long temporal
sequence. Repeated application of the same parameters gives rise to especially
pronounced difficulties.
For example, suppose that a computational graph contains a path that consists
of repeatedly multiplying by a matrix W. After t steps, this is equivalent to
multiplying by W^t. Suppose that W has an eigendecomposition W = V diag(λ) V^{-1}.
In this simple case, it is straightforward to see that

W^t = (V diag(λ) V^{-1})^t = V diag(λ)^t V^{-1}.    (8.11)

Any eigenvalues λ_i that are not near an absolute value of 1 will either explode if they
are greater than 1 in magnitude or vanish if they are less than 1 in magnitude. The
vanishing and exploding gradient problem refers to the fact that gradients
through such a graph are also scaled according to diag(λ)^t. Vanishing gradients
make it difficult to know which direction the parameters should move to improve
the cost function, while exploding gradients can make learning unstable. The cliff
structures described earlier that motivate gradient clipping are an example of the
exploding gradient phenomenon.
The repeated multiplication by W at each time step described here is very
similar to the power method algorithm used to find the largest eigenvalue of
a matrix W and the corresponding eigenvector. From this point of view it is
not surprising that x^T W^t will eventually discard all components of x that are
orthogonal to the principal eigenvector of W.
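A short numerical sketch (illustrative code, not from the original text) makes the role of the eigenvalues concrete, computing W^t through equation 8.11 and applying it to a fixed vector:

    import numpy as np

    # A matrix with one eigenvalue slightly above 1 and one slightly below 1.
    V = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    lam = np.array([1.1, 0.9])
    V_inv = np.linalg.inv(V)
    W = V @ np.diag(lam) @ V_inv

    x = np.array([1.0, 1.0])
    for t in (1, 10, 50, 100):
        Wt = V @ np.diag(lam ** t) @ V_inv   # equation 8.11: W^t = V diag(lambda)^t V^{-1}
        print(t, Wt.T @ x)                   # x^T W^t grows like 1.1^t and aligns with the
                                             # principal eigendirection; the 0.9^t part vanishes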
Recurrent networks use the same matrix W at each time step, but feedforward
networks do not, so even very deep feedforward networks can largely avoid the
vanishing and exploding gradient problem (Sussillo, 2014).
We defer a further discussion of the challenges of training recurrent networks
until section 10.7, after recurrent networks have been described in more detail.
8.2.6 Inexact Gradients
Most optimization algorithms are designed with the assumption that we have
access to the exact gradient or Hessian matrix. In practice, we usually only have
a noisy or even biased estimate of these quantities. Nearly every deep learning
algorithm relies on sampling-based estimates at least insofar as using a minibatch
of training examples to compute the gradient.
In other cases, the objective function we want to minimize is actually intractable.
When the objective function is intractable, typically its gradient is intractable as
well. In such cases we can only approximate the gradient. These issues mostly arise
with the more advanced models in part III. For example, contrastive divergence
gives a technique for approximating the gradient of the intractable log-likelihood
of a Boltzmann machine.
Various neural network optimization algorithms are designed to account for
imperfections in the gradient estimate. One can also avoid the problem by choosing
a surrogate loss function that is easier to approximate than the true loss.
8.2.7 Poor Correspondence between Local and Global Structure
Many of the problems we have discussed so far correspond to properties of the
loss function at a single point—it can be difficult to make a single step if J(θ) is
poorly conditioned at the current point θ, or if θ lies on a cliff, or if θ is a saddle
point hiding the opportunity to make progress downhill from the gradient.
It is possible to overcome all of these problems at a single point and still
perform poorly if the direction that results in the most improvement locally does
not point toward distant regions of much lower cost.
Goodfellow et al. (2015) argue that much of the runtime of training is due to
the length of the trajectory needed to arrive at the solution. Figure 8.2 shows that
the learning trajectory spends most of its time tracing out a wide arc around a
mountain-shaped structure.
Much of the research into the difficulties of optimization has focused on whether
training arrives at a global minimum, a local minimum, or a saddle point, but in
practice neural networks do not arrive at a critical point of any kind. Figure 8.1
shows that neural networks often do not arrive at a region of small gradient. Indeed,
such critical points do not even necessarily exist. For example, the loss function
− log p(y | x; θ) can lack a global minimum point and instead asymptotically
approach some value as the model becomes more confident. For a classifier with
discrete y and p(y | x) provided by a softmax, the negative log-likelihood can
become arbitrarily close to zero if the model is able to correctly classify every
example in the training set, but it is impossible to actually reach the value of
zero. Likewise, a model of real values p(y | x) = N(y; f(θ), β^{-1}) can have negative
log-likelihood that asymptotes to negative infinity—if f(θ) is able to correctly
predict the value of all training set y targets, the learning algorithm will increase
β without bound. See figure 8.4 for an example of a failure of local optimization to
find a good cost function value even in the absence of any local minima or saddle
points.
Future research will need to develop further understanding of the factors that
influence the length of the learning trajectory and better characterize the outcome
[Figure 8.4: a plot of J(θ) against θ.]
Figure 8.4: Optimization based on local downhill moves can fail if the local surface does
not point toward the global solution. Here we provide an example of how this can occur,
even if there are no saddle points and no local minima. This example cost function
contains only asymptotes toward low values, not minima. The main cause of difficulty in
this case is being initialized on the wrong side of the “mountain” and not being able to
traverse it. In higher dimensional space, learning algorithms can often circumnavigate
such mountains but the trajectory associated with doing so may be long and result in
excessive training time, as illustrated in figure 8.2.
of the process.
Many existing research directions are aimed at finding good initial points for
problems that have difficult global structure, rather than developing algorithms
that use non-local moves.
Gradient descent and essentially all learning algorithms that are effective for
training neural networks are based on making small, local moves. The previous
sections have primarily focused on how the correct direction of these local moves
can be difficult to compute. We may be able to compute some properties of the
objective function, such as its gradient, only approximately, with bias or variance
in our estimate of the correct direction. In these cases, local descent may or may
not define a reasonably short path to a valid solution, but we are not actually
able to follow the local descent path. The objective function may have issues
such as poor conditioning or discontinuous gradients, causing the region where
the gradient provides a good model of the objective function to be very small. In
these cases, local descent with steps of size ε may define a reasonably short path
to the solution, but we are only able to compute the local descent direction with
steps of size δ ≪ ε. In these cases, local descent may or may not define a path
to the solution, but the path contains many steps, so following the path incurs a
high computational cost. Sometimes local information provides us no guide, when
the function has a wide flat region, or if we manage to land exactly on a critical
point (usually this latter scenario only happens to methods that solve explicitly
for critical points, such as Newton’s method). In these cases, local descent does
not define a path to a solution at all. In other cases, local moves can be too greedy
and lead us along a path that moves downhill but away from any solution, as in
figure 8.4, or along an unnecessarily long trajectory to the solution, as in figure 8.2.
Currently, we do not understand which of these problems are most relevant to
making neural network optimization difficult, and this is an active area of research.
Regardless of which of these problems are most significant, all of them might be
avoided if there exists a region of space connected reasonably directly to a solution
by a path that local descent can follow, and if we are able to initialize learning
within that well-behaved region. This last view suggests research into choosing
good initial points for traditional optimization algorithms to use.
8.2.8 Theoretical Limits of Optimization
Several theoretical results show that there are limits on the performance of any
optimization algorithm we might design for neural networks (Blum and Rivest,
1992; Judd, 1989; Wolpert and MacReady, 1997). Typically these results have
little bearing on the use of neural networks in practice.
Some theoretical results apply only to the case where the units of a neural
network output discrete values. However, most neural network units output
smoothly increasing values that make optimization via local search feasible. Some
theoretical results show that there exist problem classes that are intractable, but
it can be difficult to tell whether a particular problem falls into that class. Other
results show that finding a solution for a network of a given size is intractable, but
in practice we can find a solution easily by using a larger network for which many
more parameter settings correspond to an acceptable solution. Moreover, in the
context of neural network training, we usually do not care about finding the exact
minimum of a function, but seek only to reduce its value sufficiently to obtain good
generalization error. Theoretical analysis of whether an optimization algorithm
can accomplish this goal is extremely difficult. Developing more realistic bounds
on the performance of optimization algorithms therefore remains an important
goal for machine learning research.
8.3 Basic Algorithms
We have previously introduced the gradient descent (section 4.3) algorithm that
follows the gradient of an entire training set downhill. This may be accelerated
considerably by using stochastic gradient descent to follow the gradient of randomly
selected minibatches downhill, as discussed in section 5.9 and section 8.1.3.
8.3.1 Stochastic Gradient Descent
Stochastic gradient descent (SGD) and its variants are probably the most used
optimization algorithms for machine learning in general and for deep learning
in particular. As discussed in section 8.1.3, it is possible to obtain an unbiased
estimate of the gradient by taking the average gradient on a minibatch of m
examples drawn i.i.d. from the data generating distribution.
Algorithm 8.1 shows how to follow this estimate of the gradient downhill.
Algorithm 8.1 Stochastic gradient descent (SGD) update at training iteration k
Require: Learning rate ε_k.
Require: Initial parameter θ
while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with corresponding targets y^(i).
    Compute gradient estimate: ĝ ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
    Apply update: θ ← θ − ε ĝ
end while
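A direct NumPy transcription of algorithm 8.1 could look as follows (an illustrative sketch; grad_fn is a user-supplied routine returning the minibatch-averaged gradient ĝ of equation 8.9, and the fixed learning rate is an arbitrary choice):

    import numpy as np

    def sgd(grad_fn, theta, X, y, lr=0.01, batch_size=100, n_steps=1000, seed=0):
        """Minimal SGD loop: sample a minibatch, estimate the gradient, step downhill."""
        rng = np.random.default_rng(seed)
        for _ in range(n_steps):
            idx = rng.choice(len(X), size=batch_size, replace=False)
            ghat = grad_fn(theta, X[idx], y[idx])   # unbiased minibatch gradient estimate
            theta = theta - lr * ghat               # theta <- theta - eps * ghat
        return theta

    # Example: linear regression with loss 0.5 * mean((X theta - y)^2).
    X = np.random.randn(10000, 5)
    true_theta = np.arange(1.0, 6.0)
    y = X @ true_theta + 0.1 * np.random.randn(10000)
    grad_fn = lambda theta, Xb, yb: Xb.T @ (Xb @ theta - yb) / len(Xb)
    theta = sgd(grad_fn, np.zeros(5), X, y)         # approaches true_theta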
A crucial parameter for the SGD algorithm is the learning rate. Previously, we
have described SGD as using a fixed learning rate ε. In practice, it is necessary to
gradually decrease the learning rate over time, so we now denote the learning rate
at iteration k as ε_k.
This is because the SGD gradient estimator introduces a source of noise (the
random sampling of m training examples) that does not vanish even when we arrive
at a minimum. By comparison, the true gradient of the total cost function becomes
small and then 0 when we approach and reach a minimum using batch gradient
descent, so batch gradient descent can use a fixed learning rate. A sufficient
condition to guarantee convergence of SGD is that

Σ_{k=1}^∞ ε_k = ∞, and    (8.12)

Σ_{k=1}^∞ ε_k^2 < ∞.    (8.13)

In practice, it is common to decay the learning rate linearly until iteration τ:

ε_k = (1 − α) ε_0 + α ε_τ    (8.14)

with α = k/τ. After iteration τ, it is common to leave ε constant.
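Equation 8.14 translates into a one-line schedule. The following sketch uses arbitrary illustrative values for ε_0, ε_τ and τ (with ε_τ set to 1% of ε_0, as the next paragraph suggests):

    def learning_rate(k, eps0=0.1, eps_tau=0.001, tau=10000):
        """Linear decay of the learning rate until iteration tau (equation 8.14),
        then held constant at eps_tau."""
        if k >= tau:
            return eps_tau
        alpha = k / tau
        return (1 - alpha) * eps0 + alpha * eps_tau

    print(learning_rate(0), learning_rate(5000), learning_rate(20000))
    # 0.1 at the start, roughly halfway between eps0 and eps_tau at k = tau/2, then 0.001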
The learning rate may be chosen by trial and error, but it is usually best
to choose it by monitoring learning curves that plot the objective function as a
function of time. This is more of an art than a science, and most guidance on this
subject should be regarded with some skepticism. When using the linear schedule,
the parameters to choose are ε_0, ε_τ, and τ. Usually τ may be set to the number of
iterations required to make a few hundred passes through the training set. Usually
ε_τ should be set to roughly 1% the value of ε_0. The main question is how to set ε_0.
If it is too large, the learning curve will show violent oscillations, with the cost
function often increasing significantly. Gentle oscillations are fine, especially if
training with a stochastic cost function such as the cost function arising from the
use of dropout. If the learning rate is too low, learning proceeds slowly, and if the
initial learning rate is too low, learning may become stuck with a high cost value.
Typically, the optimal initial learning rate, in terms of total training time and the
final cost value, is higher than the learning rate that yields the best performance
after the first 100 iterations or so. Therefore, it is usually best to monitor the first
several iterations and use a learning rate that is higher than the best-performing
learning rate at this time, but not so high that it causes severe instability.
The most important property of SGD and related minibatch or online gradient-
based optimization is that computation time per update does not grow with the
number of training examples. This allows convergence even when the number
of training examples becomes very large. For a large enough dataset, SGD may
converge to within some fixed tolerance of its final test set error before it has
processed the entire training set.
To study the convergence rate of an optimization algorithm it is common to
measure the excess error J(θ) − minθ J(θ), which is the amount that the current
cost function exceeds the minimum possible cost. When SGD is applied to a convex
problem, the excess error is O(1/√k) after k iterations, while in the strongly convex
case it is O(1/k). These bounds cannot be improved unless extra conditions are
assumed. Batch gradient descent enjoys better convergence rates than stochastic
gradient descent in theory. However, the Cramér-Rao bound (Cramér, 1946; Rao,
1945) states that generalization error cannot decrease faster than O(1/k). Bottou
and Bousquet (2008) argue that it therefore may not be worthwhile to pursue
an optimization algorithm that converges faster than O(1/k) for machine learning
tasks—faster convergence presumably corresponds to overfitting. Moreover, the
asymptotic analysis obscures many advantages that stochastic gradient descent
has after a small number of steps. With large datasets, the ability of SGD to make
rapid initial progress while evaluating the gradient for only very few examples
outweighs its slow asymptotic convergence. Most of the algorithms described in
the remainder of this chapter achieve benefits that matter in practice but are lost
in the constant factors obscured by the O( 1
k) asymptotic analysis. One can also
trade off the benefits of both batch and stochastic gradient descent by gradually
increasing the minibatch size during the course of learning.
For more information on SGD, see ( ).
Bottou 1998
8.3.2 Momentum
While stochastic gradient descent remains a very popular optimization strategy,
learning with it can sometimes be slow. The method of momentum (Polyak, 1964)
is designed to accelerate learning, especially in the face of high curvature, small but
consistent gradients, or noisy gradients. The momentum algorithm accumulates
an exponentially decaying moving average of past gradients and continues to move
in their direction. The effect of momentum is illustrated in figure 8.5.
Formally, the momentum algorithm introduces a variable v that plays the role
of velocity—it is the direction and speed at which the parameters move through
parameter space. The velocity is set to an exponentially decaying average of the
negative gradient. The name momentum derives from a physical analogy, in
which the negative gradient is a force moving a particle through parameter space,
according to Newton’s laws of motion. Momentum in physics is mass times velocity.
In the momentum learning algorithm, we assume unit mass, so the velocity vector v
may also be regarded as the momentum of the particle. A hyperparameter α ∈ [0,1)
determines how quickly the contributions of previous gradients exponentially decay.
The update rule is given by:

v ← α v − ε ∇_θ ( (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i)) ),    (8.15)

θ ← θ + v.    (8.16)

The velocity v accumulates the gradient elements ∇_θ ( (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i)) ).
The larger α is relative to ε, the more previous gradients affect the current direction.
The SGD algorithm with momentum is given in algorithm 8.2.
[Figure 8.5: contour plot of a poorly conditioned quadratic objective with the momentum trajectory overlaid.]
Figure 8.5: Momentum aims primarily to solve two problems: poor conditioning of the
Hessian matrix and variance in the stochastic gradient. Here, we illustrate how momentum
overcomes the first of these two problems. The contour lines depict a quadratic loss
function with a poorly conditioned Hessian matrix. The red path cutting across the
contours indicates the path followed by the momentum learning rule as it minimizes this
function. At each step along the way, we draw an arrow indicating the step that gradient
descent would take at that point. We can see that a poorly conditioned quadratic objective
looks like a long, narrow valley or canyon with steep sides. Momentum correctly traverses
the canyon lengthwise, while gradient steps waste time moving back and forth across the
narrow axis of the canyon. Compare also figure 4.6, which shows the behavior of gradient
descent without momentum.
Previously, the size of the step was simply the norm of the gradient multiplied
by the learning rate. Now, the size of the step depends on how large and how
aligned a sequence of gradients are. The step size is largest when many successive
gradients point in exactly the same direction. If the momentum algorithm always
observes gradient g, then it will accelerate in the direction of −g, until reaching a
terminal velocity where the size of each step is
|| ||
g
1 − α
. (8.17)
It is thus helpful to think of the momentum hyperparameter in terms of 1
1−α. For
example, α = .9 corresponds to multiplying the maximum speed by relative to
10
the gradient descent algorithm.
Common values of α used in practice include .5, .9, and .99. Like the learning
rate, α may also be adapted over time. Typically it begins with a small value and
is later raised. It is less important to adapt α over time than to shrink  over time.
Algorithm 8.2 Stochastic gradient descent (SGD) with momentum
Require: Learning rate ε, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with corresponding targets y^(i).
    Compute gradient estimate: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
    Compute velocity update: v ← α v − ε g
    Apply update: θ ← θ + v
end while
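In code, algorithm 8.2 adds only a velocity buffer to plain SGD. The sketch below is illustrative (it assumes a minibatch iterator yielding (X_batch, y_batch) pairs and a user-supplied minibatch gradient function, neither of which is defined in the text):

    import numpy as np

    def sgd_momentum(grad_fn, theta, minibatches, lr=0.01, alpha=0.9, n_steps=1000):
        """SGD with momentum: v accumulates an exponentially decaying average of
        past gradients, and the parameters move by v rather than by -lr * g."""
        v = np.zeros_like(theta)
        for _ in range(n_steps):
            X_batch, y_batch = next(minibatches)
            g = grad_fn(theta, X_batch, y_batch)
            v = alpha * v - lr * g      # velocity update (equation 8.15)
            theta = theta + v           # parameter update (equation 8.16)
        return theta

With α = .9, repeated gradients pointing in the same direction push the step size toward ε||g||/(1 − α), ten times the plain gradient descent step, as described by equation 8.17.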
We can view the momentum algorithm as simulating a particle subject to
continuous-time Newtonian dynamics. The physical analogy can help to build
intuition for how the momentum and gradient descent algorithms behave.
The position of the particle at any point in time is given by θ(t). The particle
experiences net force f(t). This force causes the particle to accelerate:

f(t) = (∂^2/∂t^2) θ(t).    (8.18)

Rather than viewing this as a second-order differential equation of the position,
we can introduce the variable v(t) representing the velocity of the particle at time
t and rewrite the Newtonian dynamics as a first-order differential equation:

v(t) = (∂/∂t) θ(t),    (8.19)

f(t) = (∂/∂t) v(t).    (8.20)
The momentum algorithm then consists of solving the differential equations via
numerical simulation. A simple numerical method for solving differential equations
is Euler’s method, which simply consists of simulating the dynamics defined by
the equation by taking small, finite steps in the direction of each gradient.
This explains the basic form of the momentum update, but what specifically are
the forces? One force is proportional to the negative gradient of the cost function:
−∇θ J (θ). This force pushes the particle downhill along the cost function surface.
The gradient descent algorithm would simply take a single step based on each
gradient, but the Newtonian scenario used by the momentum algorithm instead
uses this force to alter the velocity of the particle. We can think of the particle
as being like a hockey puck sliding down an icy surface. Whenever it descends a
steep part of the surface, it gathers speed and continues sliding in that direction
until it begins to go uphill again.
One other force is necessary. If the only force is the gradient of the cost function,
then the particle might never come to rest. Imagine a hockey puck sliding down
one side of a valley and straight up the other side, oscillating back and forth forever,
assuming the ice is perfectly frictionless. To resolve this problem, we add one
other force, proportional to −v(t). In physics terminology, this force corresponds
to viscous drag, as if the particle must push through a resistant medium such as
syrup. This causes the particle to gradually lose energy over time and eventually
converge to a local minimum.
Why do we use −v(t) and viscous drag in particular? Part of the reason to
use −v(t) is mathematical convenience—an integer power of the velocity is easy
to work with. However, other physical systems have other kinds of drag based
on other integer powers of the velocity. For example, a particle traveling through
the air experiences turbulent drag, with force proportional to the square of the
velocity, while a particle moving along the ground experiences dry friction, with a
force of constant magnitude. We can reject each of these options. Turbulent drag,
proportional to the square of the velocity, becomes very weak when the velocity is
small. It is not powerful enough to force the particle to come to rest. A particle
with a non-zero initial velocity that experiences only the force of turbulent drag
will move away from its initial position forever, with the distance from the starting
point growing like O(log t). We must therefore use a lower power of the velocity.
If we use a power of zero, representing dry friction, then the force is too strong.
When the force due to the gradient of the cost function is small but non-zero, the
constant force due to friction can cause the particle to come to rest before reaching
a local minimum. Viscous drag avoids both of these problems—it is weak enough
that the gradient can continue to cause motion until a minimum is reached, but
strong enough to prevent motion if the gradient does not justify moving.
8.3.3 Nesterov Momentum
Sutskever et al. (2013) introduced a variant of the momentum algorithm that was
inspired by Nesterov’s accelerated gradient method (Nesterov, 1983, 2004). The
update rules in this case are given by:

v ← α v − ε ∇_θ ( (1/m) Σ_{i=1}^m L(f(x^(i); θ + α v), y^(i)) ),    (8.21)

θ ← θ + v,    (8.22)

where the parameters α and ε play a similar role as in the standard momentum
method. The difference between Nesterov momentum and standard momentum is
where the gradient is evaluated. With Nesterov momentum the gradient is evaluated
after the current velocity is applied. Thus one can interpret Nesterov momentum
as attempting to add a correction factor to the standard method of momentum.
The complete Nesterov momentum algorithm is presented in algorithm 8.3.
In the convex batch gradient case, Nesterov momentum brings the rate of
convergence of the excess error from O(1/k) (after k steps) to O(1/k^2) as shown
by Nesterov (1983). Unfortunately, in the stochastic gradient case, Nesterov
momentum does not improve the rate of convergence.
Algorithm 8.3 Stochastic gradient descent (SGD) with Nesterov momentum
Require: Learning rate ε, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with corresponding labels y^(i).
    Apply interim update: θ̃ ← θ + α v
    Compute gradient (at interim point): g ← (1/m) ∇_θ̃ Σ_i L(f(x^(i); θ̃), y^(i))
    Compute velocity update: v ← α v − ε g
    Apply update: θ ← θ + v
end while
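Relative to the momentum sketch above, the only change is the interim point at which the gradient is evaluated (again an illustrative sketch under the same assumptions, not the book's code):

    import numpy as np

    def sgd_nesterov(grad_fn, theta, minibatches, lr=0.01, alpha=0.9, n_steps=1000):
        """SGD with Nesterov momentum: the gradient is evaluated at the interim point
        theta + alpha * v, i.e. after the current velocity has been applied."""
        v = np.zeros_like(theta)
        for _ in range(n_steps):
            X_batch, y_batch = next(minibatches)
            theta_interim = theta + alpha * v             # interim update
            g = grad_fn(theta_interim, X_batch, y_batch)  # gradient at the interim point
            v = alpha * v - lr * g                        # velocity update
            theta = theta + v                             # parameter update
        return theta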
8.4 Parameter Initialization Strategies
Some optimization algorithms are not iterative by nature and simply solve for a
solution point. Other optimization algorithms are iterative by nature but, when
applied to the right class of optimization problems, converge to acceptable solutions
in an acceptable amount of time regardless of initialization. Deep learning training
algorithms usually do not have either of these luxuries. Training algorithms for deep
learning models are usually iterative in nature and thus require the user to specify
some initial point from which to begin the iterations. Moreover, training deep
models is a sufficiently difficult task that most algorithms are strongly affected by
the choice of initialization. The initial point can determine whether the algorithm
converges at all, with some initial points being so unstable that the algorithm
encounters numerical difficulties and fails altogether. When learning does converge,
the initial point can determine how quickly learning converges and whether it
converges to a point with high or low cost. Also, points of comparable cost
can have wildly varying generalization error, and the initial point can affect the
generalization as well.
Modern initialization strategies are simple and heuristic. Designing improved
initialization strategies is a difficult task because neural network optimization is
not yet well understood. Most initialization strategies are based on achieving some
nice properties when the network is initialized. However, we do not have a good
understanding of which of these properties are preserved under which circumstances
after learning begins to proceed. A further difficulty is that some initial points
may be beneficial from the viewpoint of optimization but detrimental from the
viewpoint of generalization. Our understanding of how the initial point affects
generalization is especially primitive, offering little to no guidance for how to select
the initial point.
Perhaps the only property known with complete certainty is that the initial
parameters need to “break symmetry” between different units. If two hidden
units with the same activation function are connected to the same inputs, then
these units must have different initial parameters. If they have the same initial
parameters, then a deterministic learning algorithm applied to a deterministic cost
and model will constantly update both of these units in the same way. Even if the
model or training algorithm is capable of using stochasticity to compute different
updates for different units (for example, if one trains with dropout), it is usually
best to initialize each unit to compute a different function from all of the other
units. This may help to make sure that no input patterns are lost in the null
space of forward propagation and no gradient patterns are lost in the null space
of back-propagation. The goal of having each unit compute a different function
motivates random initialization of the parameters. We could explicitly search
for a large set of basis functions that are all mutually different from each other,
but this often incurs a noticeable computational cost. For example, if we have at
most as many outputs as inputs, we could use Gram-Schmidt orthogonalization
on an initial weight matrix, and be guaranteed that each unit computes a very
different function from each other unit. Random initialization from a high-entropy
distribution over a high-dimensional space is computationally cheaper and unlikely
to assign any units to compute the same function as each other.
Typically, we set the biases for each unit to heuristically chosen constants, and
initialize only the weights randomly. Extra parameters, for example, parameters
encoding the conditional variance of a prediction, are usually set to heuristically
chosen constants much like the biases are.
We almost always initialize all the weights in the model to values drawn
randomly from a Gaussian or uniform distribution. The choice of Gaussian
or uniform distribution does not seem to matter very much, but has not been
exhaustively studied. The scale of the initial distribution, however, does have a
large effect on both the outcome of the optimization procedure and on the ability
of the network to generalize.
Larger initial weights will yield a stronger symmetry breaking effect, helping
to avoid redundant units. They also help to avoid losing signal during forward or
back-propagation through the linear component of each layer—larger values in the
matrix result in larger outputs of matrix multiplication. Initial weights that are
too large may, however, result in exploding values during forward propagation or
back-propagation. In recurrent networks, large weights can also result in chaos
(such extreme sensitivity to small perturbations of the input that the behavior
of the deterministic forward propagation procedure appears random). To some
extent, the exploding gradient problem can be mitigated by gradient clipping
(thresholding the values of the gradients before performing a gradient descent step).
Large weights may also result in extreme values that cause the activation function
to saturate, causing complete loss of gradient through saturated units. These
competing factors determine the ideal initial scale of the weights.
The perspectives of regularization and optimization can give very different
insights into how we should initialize a network. The optimization perspective
suggests that the weights should be large enough to propagate information success-
fully, but some regularization concerns encourage making them smaller. The use
of an optimization algorithm such as stochastic gradient descent that makes small
incremental changes to the weights and tends to halt in areas that are nearer to
the initial parameters (whether due to getting stuck in a region of low gradient, or
due to triggering some early stopping criterion based on overfitting) expresses a
prior that the final parameters should be close to the initial parameters. Recall
from section 7.8 that gradient descent with early stopping is equivalent to weight
decay for some models. In the general case, gradient descent with early stopping is
not the same as weight decay, but does provide a loose analogy for thinking about
the effect of initialization. We can think of initializing the parameters θ to θ0 as
being similar to imposing a Gaussian prior p(θ) with mean θ0 . From this point
of view, it makes sense to choose θ0 to be near 0. This prior says that it is more
likely that units do not interact with each other than that they do interact. Units
interact only if the likelihood term of the objective function expresses a strong
preference for them to interact. On the other hand, if we initialize θ0 to large
values, then our prior specifies which units should interact with each other, and
how they should interact.
Some heuristics are available for choosing the initial scale of the weights. One
heuristic is to initialize the weights of a fully connected layer with m inputs and
n outputs by sampling each weight from U(−1/√m, 1/√m), while Glorot and Bengio
(2010) suggest using the normalized initialization

W_{i,j} ∼ U( −√(6/(m + n)), √(6/(m + n)) ).    (8.23)
This latter heuristic is designed to compromise between the goal of initializing
all layers to have the same activation variance and the goal of initializing all
layers to have the same gradient variance. The formula is derived using the
assumption that the network consists only of a chain of matrix multiplications,
with no nonlinearities. Real neural networks obviously violate this assumption,
but many strategies designed for the linear model perform reasonably well on its
nonlinear counterparts.
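Both heuristics amount to choosing the width of a uniform distribution from the layer's fan-in and fan-out; a small illustrative sketch (not from the original text) follows:

    import numpy as np

    def normalized_init(m, n, seed=0):
        """Normalized (Glorot and Bengio, 2010) initialization, equation 8.23,
        for a fully connected layer with m inputs and n outputs."""
        rng = np.random.default_rng(seed)
        limit = np.sqrt(6.0 / (m + n))
        return rng.uniform(-limit, limit, size=(n, m))

    def simple_init(m, n, seed=0):
        """The simpler fan-in heuristic: each weight drawn from U(-1/sqrt(m), 1/sqrt(m))."""
        rng = np.random.default_rng(seed)
        limit = 1.0 / np.sqrt(m)
        return rng.uniform(-limit, limit, size=(n, m))

    W = normalized_init(m=784, n=256)
    print(W.std())   # close to sqrt(2 / (m + n)), the variance the heuristic targets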
Saxe et al. (2013) recommend initializing to random orthogonal matrices, with
a carefully chosen scaling or gain factor g that accounts for the nonlinearity applied
at each layer. They derive specific values of the scaling factor for different types of
nonlinear activation functions. This initialization scheme is also motivated by a
model of a deep network as a sequence of matrix multiplies without nonlinearities.
Under such a model, this initialization scheme guarantees that the total number of
training iterations required to reach convergence is independent of depth.
Increasing the scaling factor g pushes the network toward the regime where
activations increase in norm as they propagate forward through the network and
gradients increase in norm as they propagate backward. Sussillo (2014) showed
that setting the gain factor correctly is sufficient to train networks as deep as
1,000 layers, without needing to use orthogonal initializations. A key insight of
this approach is that in feedforward networks, activations and gradients can grow
or shrink on each step of forward or back-propagation, following a random walk
behavior. This is because feedforward networks use a different weight matrix at
each layer. If this random walk is tuned to preserve norms, then feedforward
networks can mostly avoid the vanishing and exploding gradients problem that
arises when the same weight matrix is used at each step, described in section 8.2.5.
Unfortunately, these optimal criteria for initial weights often do not lead to
optimal performance. This may be for three different reasons. First, we may
be using the wrong criteria—it may not actually be beneficial to preserve the
norm of a signal throughout the entire network. Second, the properties imposed
at initialization may not persist after learning has begun to proceed. Third, the
criteria might succeed at improving the speed of optimization but inadvertently
increase generalization error. In practice, we usually need to treat the scale of the
weights as a hyperparameter whose optimal value lies somewhere roughly near but
not exactly equal to the theoretical predictions.
One drawback to scaling rules that set all of the initial weights to have the
same standard deviation, such as 1/√m, is that every individual weight becomes
extremely small when the layers become large. Martens (2010) introduced an
alternative initialization scheme called sparse initialization in which each unit is
initialized to have exactly k non-zero weights. The idea is to keep the total amount
of input to the unit independent from the number of inputs m without making the
magnitude of individual weight elements shrink with m. Sparse initialization helps
to achieve more diversity among the units at initialization time. However, it also
imposes a very strong prior on the weights that are chosen to have large Gaussian
values. Because it takes a long time for gradient descent to shrink “incorrect” large
values, this initialization scheme can cause problems for units such as maxout units
that have several filters that must be carefully coordinated with each other.
When computational resources allow it, it is usually a good idea to treat the
initial scale of the weights for each layer as a hyperparameter, and to choose these
scales using a hyperparameter search algorithm described in section 11.4.2, such
as random search. The choice of whether to use dense or sparse initialization
can also be made a hyperparameter. Alternately, one can manually search for
the best initial scales. A good rule of thumb for choosing the initial scales is to
look at the range or standard deviation of activations or gradients on a single
minibatch of data. If the weights are too small, the range of activations across the
minibatch will shrink as the activations propagate forward through the network.
By repeatedly identifying the first layer with unacceptably small activations and
increasing its weights, it is possible to eventually obtain a network with reasonable
initial activations throughout. If learning is still too slow at this point, it can be
useful to look at the range or standard deviation of the gradients as well as the
activations. This procedure can in principle be automated and is generally less
computationally costly than hyperparameter optimization based on validation set
error because it is based on feedback from the behavior of the initial model on a
single batch of data, rather than on feedback from a trained model on the validation
set. While long used heuristically, this protocol has recently been specified more
formally and studied by Mishkin and Matas (2015).
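One possible rendering of this rule of thumb, assuming a simple fully connected network with tanh units; the target activation spread, growth factor and layer structure are illustrative choices, not prescriptions from the text.

import numpy as np

def layer_activations(weights, x):
    # Collect the activations of each layer on a single minibatch x.
    activations, h = [], x
    for W in weights:
        h = np.tanh(h @ W)
        activations.append(h)
    return activations

def tune_initial_scales(weights, x, target_std=0.5, grow=1.1, max_passes=200):
    # Repeatedly find the first layer whose activations are unacceptably small
    # across the minibatch and increase its weights.
    for _ in range(max_passes):
        stds = [h.std() for h in layer_activations(weights, x)]
        too_small = [i for i, s in enumerate(stds) if s < target_std]
        if not too_small:
            break
        weights[too_small[0]] *= grow
    return weights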
So far we have focused on the initialization of the weights. Fortunately,
initialization of other parameters is typically easier.
The approach for setting the biases must be coordinated with the approach
for setting the weights. Setting the biases to zero is compatible with most weight
initialization schemes. There are a few situations where we may set some biases to
non-zero values:
• If a bias is for an output unit, then it is often beneficial to initialize the bias to
obtain the right marginal statistics of the output. To do this, we assume that
the initial weights are small enough that the output of the unit is determined
only by the bias. This justifies setting the bias to the inverse of the activation
function applied to the marginal statistics of the output in the training set.
For example, if the output is a distribution over classes and this distribution
is a highly skewed distribution with the marginal probability of class i given
by element ci of some vector c, then we can set the bias vector b by solving
the equation softmax(b) = c. This applies not only to classifiers but also to
models we will encounter in part III, such as autoencoders and Boltzmann
machines. These models have layers whose output should resemble the input
data x, and it can be very helpful to initialize the biases of such layers to
match the marginal distribution over x (a small sketch of the softmax(b) = c
case follows this list).
• Sometimes we may want to choose the bias to avoid causing too much
saturation at initialization. For example, we may set the bias of a ReLU
hidden unit to 0.1 rather than 0 to avoid saturating the ReLU at initialization.
This approach is not compatible with weight initialization schemes that do
not expect strong input from the biases though. For example, it is not
recommended for use with random walk initialization (Sussillo, 2014).
• Sometimes a unit controls whether other units are able to participate in a
function. In such situations, we have a unit with output u and another unit
h ∈ [0, 1], and they are multiplied together to produce an output uh. We
can view h as a gate that determines whether uh ≈ u or uh ≈ 0. In these
situations, we want to set the bias for h so that h ≈ 1 most of the time at
initialization. Otherwise u does not have a chance to learn. For example,
Jozefowicz et al. (2015) advocate setting the bias to 1 for the forget gate of
the LSTM model, described in section 10.10.
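As a small sketch of the output-bias heuristic from the first bullet above: with a softmax output layer, solving softmax(b) = c amounts to setting b = log c up to an additive constant. The class frequencies below are made up for illustration.

import numpy as np

c = np.array([0.90, 0.07, 0.03])   # marginal class frequencies from the training set
b = np.log(c)                      # solves softmax(b) = c up to an additive constant

softmax_b = np.exp(b) / np.exp(b).sum()
assert np.allclose(softmax_b, c)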
Another common type of parameter is a variance or precision parameter. For
example, we can perform linear regression with a conditional variance estimate
using the model
p(y | x) = N(y | wᵀx + b, 1/β),    (8.24)
where β is a precision parameter. We can usually initialize variance or precision
parameters to 1 safely. Another approach is to assume the initial weights are close
enough to zero that the biases may be set while ignoring the effect of the weights,
then set the biases to produce the correct marginal mean of the output, and set
the variance parameters to the marginal variance of the output in the training set.
Besides these simple constant or random methods of initializing model parame-
ters, it is possible to initialize model parameters using machine learning. A common
strategy discussed in part III of this book is to initialize a supervised model with
the parameters learned by an unsupervised model trained on the same inputs.
One can also perform supervised training on a related task. Even performing
supervised training on an unrelated task can sometimes yield an initialization that
offers faster convergence than a random initialization. Some of these initialization
strategies may yield faster convergence and better generalization because they
encode information about the distribution in the initial parameters of the model.
Others apparently perform well primarily because they set the parameters to have
the right scale or set different units to compute different functions from each other.
8.5 Algorithms with Adaptive Learning Rates
Neural network researchers have long realized that the learning rate was reliably one
of the hyperparameters that is the most difficult to set because it has a significant
impact on model performance. As we have discussed in sections 4.3 and 8.2, the
cost is often highly sensitive to some directions in parameter space and insensitive
to others. The momentum algorithm can mitigate these issues somewhat, but
does so at the expense of introducing another hyperparameter. In the face of this,
it is natural to ask if there is another way. If we believe that the directions of
sensitivity are somewhat axis-aligned, it can make sense to use a separate learning
rate for each parameter, and automatically adapt these learning rates throughout
the course of learning.
The delta-bar-delta algorithm (Jacobs, 1988) is an early heuristic approach
to adapting individual learning rates for model parameters during training. The
approach is based on a simple idea: if the partial derivative of the loss, with respect
to a given model parameter, remains the same sign, then the learning rate should
increase. If the partial derivative with respect to that parameter changes sign,
then the learning rate should decrease. Of course, this kind of rule can only be
applied to full batch optimization.
More recently, a number of incremental (or mini-batch-based) methods have
been introduced that adapt the learning rates of model parameters. This section
will briefly review a few of these algorithms.
8.5.1 AdaGrad
The AdaGrad algorithm, shown in algorithm 8.4, individually adapts the learning
rates of all model parameters by scaling them inversely proportional to the square
root of the sum of all of their historical squared values (Duchi et al., 2011). The
parameters with the largest partial derivative of the loss have a correspondingly
rapid decrease in their learning rate, while parameters with small partial derivatives
have a relatively small decrease in their learning rate. The net effect is greater
progress in the more gently sloped directions of parameter space.
In the context of convex optimization, the AdaGrad algorithm enjoys some
desirable theoretical properties. However, empirically it has been found that—for
training deep neural network models—the accumulation of squared gradients from
the beginning of training can result in a premature and excessive decrease in the
effective learning rate. AdaGrad performs well for some but not all deep learning
models.
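A minimal NumPy sketch of the per-parameter update used by AdaGrad (algorithm 8.4, shown later in this section); the minibatch gradient g is assumed to be computed by the caller, and the learning rate is an illustrative value.

import numpy as np

def adagrad_step(theta, g, r, lr=0.01, delta=1e-7):
    # r accumulates the sum of squared gradients over all of training.
    r += g * g
    # Scale each coordinate's step inversely to the square root of its history.
    theta -= lr / (delta + np.sqrt(r)) * g
    return theta, r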
8.5.2 RMSProp
The RMSProp algorithm (Hinton, 2012) modifies AdaGrad to perform better in
the non-convex setting by changing the gradient accumulation into an exponentially
weighted moving average. AdaGrad is designed to converge rapidly when applied
to a convex function. When applied to a non-convex function to train a neural
network, the learning trajectory may pass through many different structures and
eventually arrive at a region that is a locally convex bowl. AdaGrad shrinks the
learning rate according to the entire history of the squared gradient and may
Algorithm 8.4 The AdaGrad algorithm
Require: Global learning rate ε
Require: Initial parameter θ
Require: Small constant δ, perhaps 10⁻⁷, for numerical stability
  Initialize gradient accumulation variable r = 0
  while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with
    corresponding targets y^(i).
    Compute gradient: g ← (1/m) ∇θ Σᵢ L(f(x^(i); θ), y^(i))
    Accumulate squared gradient: r ← r + g ⊙ g
    Compute update: ∆θ ← −(ε / (δ + √r)) ⊙ g. (Division and square root applied
    element-wise)
    Apply update: θ ← θ + ∆θ
  end while
have made the learning rate too small before arriving at such a convex structure.
RMSProp uses an exponentially decaying average to discard history from the
extreme past so that it can converge rapidly after finding a convex bowl, as if it
were an instance of the AdaGrad algorithm initialized within that bowl.
RMSProp is shown in its standard form in algorithm 8.5 and combined with
Nesterov momentum in algorithm 8.6. Compared to AdaGrad, the use of the
moving average introduces a new hyperparameter, ρ, that controls the length scale
of the moving average.
Empirically, RMSProp has been shown to be an effective and practical op-
timization algorithm for deep neural networks. It is currently one of the go-to
optimization methods being employed routinely by deep learning practitioners.
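A minimal sketch of the corresponding update (algorithm 8.5), with the exponentially weighted moving average replacing AdaGrad's running sum; the decay rate and learning rate are typical but illustrative values.

import numpy as np

def rmsprop_step(theta, g, r, lr=1e-3, rho=0.9, delta=1e-6):
    # Exponentially decaying average of squared gradients.
    r = rho * r + (1.0 - rho) * g * g
    theta = theta - lr / np.sqrt(delta + r) * g
    return theta, r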
8.5.3 Adam
Adam (Kingma and Ba, 2014) is yet another adaptive learning rate optimization
algorithm and is presented in algorithm 8.7. The name “Adam” derives from
the phrase “adaptive moments.” In the context of the earlier algorithms, it is
perhaps best seen as a variant on the combination of RMSProp and momentum
with a few important distinctions. First, in Adam, momentum is incorporated
directly as an estimate of the first order moment (with exponential weighting) of
the gradient. The most straightforward way to add momentum to RMSProp is to
apply momentum to the rescaled gradients. The use of momentum in combination
with rescaling does not have a clear theoretical motivation. Second, Adam includes
Algorithm 8.5 The RMSProp algorithm
Require: Global learning rate ε, decay rate ρ
Require: Initial parameter θ
Require: Small constant δ, usually 10⁻⁶, used to stabilize division by small numbers
  Initialize accumulation variable r = 0
  while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with
    corresponding targets y^(i).
    Compute gradient: g ← (1/m) ∇θ Σᵢ L(f(x^(i); θ), y^(i))
    Accumulate squared gradient: r ← ρr + (1 − ρ) g ⊙ g
    Compute parameter update: ∆θ = −(ε / √(δ + r)) ⊙ g. (1/√(δ + r) applied element-wise)
    Apply update: θ ← θ + ∆θ
  end while
bias corrections to the estimates of both the first-order moments (the momentum
term) and the (uncentered) second-order moments to account for their initialization
at the origin (see algorithm 8.7). RMSProp also incorporates an estimate of the
(uncentered) second-order moment; however, it lacks the correction factor. Thus,
unlike in Adam, the RMSProp second-order moment estimate may have high bias
early in training. Adam is generally regarded as being fairly robust to the choice
of hyperparameters, though the learning rate sometimes needs to be changed from
the suggested default.
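A compact sketch of the Adam update of algorithm 8.7, including the bias corrections of both moment estimates; the defaults mirror the suggested values, and the minibatch gradient g is assumed to be supplied by the caller.

import numpy as np

def adam_step(theta, g, s, r, t, lr=0.001, rho1=0.9, rho2=0.999, delta=1e-8):
    t += 1
    s = rho1 * s + (1.0 - rho1) * g            # biased first moment estimate
    r = rho2 * r + (1.0 - rho2) * g * g        # biased second moment estimate
    s_hat = s / (1.0 - rho1 ** t)              # bias-corrected first moment
    r_hat = r / (1.0 - rho2 ** t)              # bias-corrected second moment
    theta = theta - lr * s_hat / (np.sqrt(r_hat) + delta)
    return theta, s, r, t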
8.5.4 Choosing the Right Optimization Algorithm
In this section, we discussed a series of related algorithms that each seek to address
the challenge of optimizing deep models by adapting the learning rate for each
model parameter. At this point, a natural question is: which algorithm should one
choose?
Unfortunately, there is currently no consensus on this point. Schaul et al. (2014)
presented a valuable comparison of a large number of optimization algorithms
across a wide range of learning tasks. While the results suggest that the family of
algorithms with adaptive learning rates (represented by RMSProp and AdaDelta)
performed fairly robustly, no single best algorithm has emerged.
Currently, the most popular optimization algorithms actively in use include
SGD, SGD with momentum, RMSProp, RMSProp with momentum, AdaDelta
and Adam. The choice of which algorithm to use, at this point, seems to depend
Algorithm 8.6 RMSProp algorithm with Nesterov momentum
Require: Global learning rate ε, decay rate ρ, momentum coefficient α
Require: Initial parameter θ, initial velocity v
  Initialize accumulation variable r = 0
  while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with
    corresponding targets y^(i).
    Compute interim update: θ̃ ← θ + αv
    Compute gradient: g ← (1/m) ∇θ̃ Σᵢ L(f(x^(i); θ̃), y^(i))
    Accumulate gradient: r ← ρr + (1 − ρ) g ⊙ g
    Compute velocity update: v ← αv − (ε / √r) ⊙ g. (1/√r applied element-wise)
    Apply update: θ ← θ + v
  end while
largely on the user’s familiarity with the algorithm (for ease of hyperparameter
tuning).
8.6 Approximate Second-Order Methods
In this section we discuss the application of second-order methods to the training
of deep networks. See LeCun et al. (1998a) for an earlier treatment of this subject.
For simplicity of exposition, the only objective function we examine is the empirical
risk:
J(θ) = E_{x,y∼p̂_data(x,y)} [L(f(x; θ), y)] = (1/m) Σ_{i=1}^{m} L(f(x^(i); θ), y^(i)).    (8.25)
However the methods we discuss here extend readily to more general objective
functions that, for instance, include parameter regularization terms such as those
discussed in chapter 7.
8.6.1 Newton’s Method
In section 4.3, we introduced second-order gradient methods. In contrast to first-
order methods, second-order methods make use of second derivatives to improve
optimization. The most widely used second-order method is Newton’s method. We
now describe Newton’s method in more detail, with emphasis on its application to
neural network training.
Algorithm 8.7 The Adam algorithm
Require: Step size ε (suggested default: 0.001)
Require: Exponential decay rates for moment estimates, ρ1 and ρ2 in [0, 1). (Suggested
  defaults: 0.9 and 0.999 respectively)
Require: Small constant δ used for numerical stabilization. (Suggested default: 10⁻⁸)
Require: Initial parameters θ
  Initialize 1st and 2nd moment variables s = 0, r = 0
  Initialize time step t = 0
  while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with
    corresponding targets y^(i).
    Compute gradient: g ← (1/m) ∇θ Σᵢ L(f(x^(i); θ), y^(i))
    t ← t + 1
    Update biased first moment estimate: s ← ρ1 s + (1 − ρ1) g
    Update biased second moment estimate: r ← ρ2 r + (1 − ρ2) g ⊙ g
    Correct bias in first moment: ŝ ← s / (1 − ρ1^t)
    Correct bias in second moment: r̂ ← r / (1 − ρ2^t)
    Compute update: ∆θ = −ε ŝ / (√r̂ + δ) (operations applied element-wise)
    Apply update: θ ← θ + ∆θ
  end while
Newton’s method is an optimization scheme based on using a second-order Tay-
lor series expansion to approximate J (θ) near some point θ0, ignoring derivatives
of higher order:
J(θ) ≈ J(θ0) + (θ − θ0)ᵀ ∇θ J(θ0) + ½ (θ − θ0)ᵀ H (θ − θ0),    (8.26)
where H is the Hessian of J with respect to θ evaluated at θ0. If we then solve for
the critical point of this function, we obtain the Newton parameter update rule:
θ* = θ0 − H⁻¹ ∇θ J(θ0)    (8.27)
Thus for a locally quadratic function (with positive definite H), by rescaling
the gradient by H⁻¹, Newton's method jumps directly to the minimum. If the
objective function is convex but not quadratic (there are higher-order terms), this
update can be iterated, yielding the training algorithm associated with Newton’s
method, given in algorithm 8.8.
Algorithm 8.8 Newton's method with objective J(θ) = (1/m) Σ_{i=1}^{m} L(f(x^(i); θ), y^(i))
Require: Initial parameter θ0
Require: Training set of m examples
  while stopping criterion not met do
    Compute gradient: g ← (1/m) ∇θ Σᵢ L(f(x^(i); θ), y^(i))
    Compute Hessian: H ← (1/m) ∇²θ Σᵢ L(f(x^(i); θ), y^(i))
    Compute Hessian inverse: H⁻¹
    Compute update: ∆θ = −H⁻¹ g
    Apply update: θ = θ + ∆θ
  end while
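A direct NumPy rendering of the update in algorithm 8.8 for a small problem where the gradient and Hessian can be formed explicitly; solving the linear system rather than inverting H is an implementation choice, and the quadratic example is purely illustrative.

import numpy as np

def newton_step(theta, grad, hessian):
    # Solve H * delta = -g instead of forming the inverse Hessian explicitly.
    g = grad(theta)
    H = hessian(theta)
    delta = np.linalg.solve(H, -g)
    return theta + delta

# Toy example: minimize 0.5 * theta^T A theta - b^T theta for positive definite A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
theta = newton_step(np.zeros(2), lambda th: A @ th - b, lambda th: A)
# For a quadratic objective, a single Newton step lands on the minimum (A theta = b).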
For surfaces that are not quadratic, as long as the Hessian remains positive
definite, Newton’s method can be applied iteratively. This implies a two-step
iterative procedure. First, update or compute the inverse Hessian (i.e. by updat-
ing the quadratic approximation). Second, update the parameters according to
equation 8.27.
In section 8.2.3, we discussed how Newton's method is appropriate only when
the Hessian is positive definite. In deep learning, the surface of the objective
function is typically non-convex with many features, such as saddle points, that
are problematic for Newton’s method. If the eigenvalues of the Hessian are not
all positive, for example, near a saddle point, then Newton’s method can actually
cause updates to move in the wrong direction. This situation can be avoided
by regularizing the Hessian. Common regularization strategies include adding a
constant, α, along the diagonal of the Hessian. The regularized update becomes
θ* = θ0 − [H(f(θ0)) + αI]⁻¹ ∇θ f(θ0).    (8.28)
This regularization strategy is used in approximations to Newton’s method, such
as the Levenberg–Marquardt algorithm (Levenberg, 1944; Marquardt, 1963), and
works fairly well as long as the negative eigenvalues of the Hessian are still relatively
close to zero. In cases where there are more extreme directions of curvature, the
value of α would have to be sufficiently large to offset the negative eigenvalues.
However, as α increases in size, the Hessian becomes dominated by the αI diagonal
and the direction chosen by Newton’s method converges to the standard gradient
divided by α. When strong negative curvature is present, α may need to be so
large that Newton’s method would make smaller steps than gradient descent with
a properly chosen learning rate.
Beyond the challenges created by certain features of the objective function,
such as saddle points, the application of Newton’s method for training large neural
networks is limited by the significant computational burden it imposes. The
number of elements in the Hessian is squared in the number of parameters, so with
k parameters (and for even very small neural networks the number of parameters
k can be in the millions), Newton's method would require the inversion of a k × k
matrix—with computational complexity of O(k³). Also, since the parameters will
change with every update, the inverse Hessian has to be computed at every training
iteration. As a consequence, only networks with a very small number of parameters
can be practically trained via Newton’s method. In the remainder of this section,
we will discuss alternatives that attempt to gain some of the advantages of Newton’s
method while side-stepping the computational hurdles.
8.6.2 Conjugate Gradients
Conjugate gradients is a method to efficiently avoid the calculation of the inverse
Hessian by iteratively descending conjugate directions. The inspiration for this
approach follows from a careful study of the weakness of the method of steepest
descent (see section 4.3 for details), where line searches are applied iteratively in
the direction associated with the gradient. Figure 8.6 illustrates how the method of
steepest descent, when applied in a quadratic bowl, progresses in a rather ineffective
back-and-forth, zig-zag pattern. This happens because each line search direction,
when given by the gradient, is guaranteed to be orthogonal to the previous line
search direction.
Let the previous search direction be dt−1. At the minimum, where the line
search terminates, the directional derivative is zero in direction dt−1: ∇θJ(θ) ·
dt−1 = 0. Since the gradient at this point defines the current search direction,
dt = ∇θJ (θ) will have no contribution in the direction dt−1. Thus dt is orthogonal
to dt−1. This relationship between dt−1 and dt is illustrated in figure 8.6 for
multiple iterations of steepest descent. As demonstrated in the figure, the choice of
orthogonal directions of descent does not preserve the minimum along the previous
search directions. This gives rise to the zig-zag pattern of progress, where by
descending to the minimum in the current gradient direction, we must re-minimize
the objective in the previous gradient direction. Thus, by following the gradient at
the end of each line search we are, in a sense, undoing progress we have already
made in the direction of the previous line search. The method of conjugate gradients
seeks to address this problem.
In the method of conjugate gradients, we seek to find a search direction that
is conjugate to the previous line search direction, i.e. it will not undo progress
made in that direction. At training iteration t, the next search direction dt takes
Figure 8.6: The method of steepest descent applied to a quadratic cost surface. The
method of steepest descent involves jumping to the point of lowest cost along the line
defined by the gradient at the initial point on each step. This resolves some of the problems
seen with using a fixed learning rate in figure 4.6, but even with the optimal step size
the algorithm still makes back-and-forth progress toward the optimum. By definition, at
the minimum of the objective along a given direction, the gradient at the final point is
orthogonal to that direction.
the form:
dt = ∇θ J(θ) + βt dt−1,    (8.29)
where βt is a coefficient whose magnitude controls how much of the direction, dt−1,
we should add back to the current search direction.
Two directions, dt and dt−1, are defined as conjugate if dtᵀ H dt−1 = 0, where
H is the Hessian matrix.
The straightforward way to impose conjugacy would involve calculation of the
eigenvectors of H to choose βt, which would not satisfy our goal of developing
a method that is more computationally viable than Newton’s method for large
problems. Can we calculate the conjugate directions without resorting to these
calculations? Fortunately the answer to that is yes.
Two popular methods for computing the βt are:
1. Fletcher-Reeves:
βt = (∇θJ(θt)ᵀ ∇θJ(θt)) / (∇θJ(θt−1)ᵀ ∇θJ(θt−1))    (8.30)
2. Polak-Ribière:
βt = ((∇θJ(θt) − ∇θJ(θt−1))ᵀ ∇θJ(θt)) / (∇θJ(θt−1)ᵀ ∇θJ(θt−1))    (8.31)
For a quadratic surface, the conjugate directions ensure that the gradient along
the previous direction does not increase in magnitude. We therefore stay at the
minimum along the previous directions. As a consequence, in a k-dimensional
parameter space, the conjugate gradient method requires at most k line searches to
achieve the minimum. The conjugate gradient algorithm is given in algorithm 8.9.
Algorithm 8.9 The conjugate gradient method
Require: Initial parameters θ0
Require: Training set of m examples
  Initialize ρ0 = 0
  Initialize g0 = 0
  Initialize t = 1
  while stopping criterion not met do
    Initialize the gradient gt = 0
    Compute gradient: gt ← (1/m) ∇θ Σᵢ L(f(x^(i); θ), y^(i))
    Compute βt = ((gt − gt−1)ᵀ gt) / (gt−1ᵀ gt−1) (Polak-Ribière)
    (Nonlinear conjugate gradient: optionally reset βt to zero, for example if t is
    a multiple of some constant k, such as k = 5)
    Compute search direction: ρt = −gt + βt ρt−1
    Perform line search to find: ε* = argmin_ε (1/m) Σ_{i=1}^{m} L(f(x^(i); θt + ερt), y^(i))
    (On a truly quadratic cost function, analytically solve for ε* rather than explicitly
    searching for it)
    Apply update: θt+1 = θt + ε* ρt
    t ← t + 1
  end while
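A stripped-down sketch of algorithm 8.9 with the Polak-Ribière coefficient and a crude backtracking line search standing in for an exact minimization; the reset interval and line-search constants are arbitrary choices.

import numpy as np

def backtracking_line_search(J, theta, d, g, step=1.0, shrink=0.5, c=1e-4, max_halvings=50):
    # Shrink the step until a sufficient-decrease (Armijo) condition holds.
    for _ in range(max_halvings):
        if J(theta + step * d) <= J(theta) + c * step * (g @ d):
            break
        step *= shrink
    return step

def nonlinear_cg(J, grad, theta, iters=100, reset_every=5):
    g_prev = None
    d = np.zeros_like(theta)
    for t in range(1, iters + 1):
        g = grad(theta)
        if g_prev is None or t % reset_every == 0:
            beta = 0.0                                   # occasional reset to steepest descent
        else:
            beta = (g - g_prev) @ g / (g_prev @ g_prev)  # Polak-Ribière coefficient
        d = -g + beta * d
        step = backtracking_line_search(J, theta, d, g)
        theta = theta + step * d
        g_prev = g
    return theta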
Nonlinear Conjugate Gradients: So far we have discussed the method of
conjugate gradients as it is applied to quadratic objective functions. Of course,
our primary interest in this chapter is to explore optimization methods for training
neural networks and other related deep learning models where the corresponding
objective function is far from quadratic. Perhaps surprisingly, the method of
conjugate gradients is still applicable in this setting, though with some modification.
Without any assurance that the objective is quadratic, the conjugate directions
are no longer assured to remain at the minimum of the objective for previous
directions. As a result, the nonlinear conjugate gradients algorithm includes
occasional resets where the method of conjugate gradients is restarted with line
search along the unaltered gradient.
Practitioners report reasonable results in applications of the nonlinear conjugate
gradients algorithm to training neural networks, though it is often beneficial to
initialize the optimization with a few iterations of stochastic gradient descent before
commencing nonlinear conjugate gradients. Also, while the (nonlinear) conjugate
gradients algorithm has traditionally been cast as a batch method, minibatch
versions have been used successfully for the training of neural networks (Le et al.,
2011). Adaptations of conjugate gradients specifically for neural networks have
been proposed earlier, such as the scaled conjugate gradients algorithm (Moller,
1993).
8.6.3 BFGS
The Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm attempts to
bring some of the advantages of Newton’s method without the computational
burden. In that respect, BFGS is similar to the conjugate gradient method.
However, BFGS takes a more direct approach to the approximation of Newton’s
update. Recall that Newton’s update is given by
θ∗
= θ0 − H−1
∇θ J(θ0), (8.32)
where H is the Hessian of J with respect to θ evaluated at θ0. The primary
computational difficulty in applying Newton’s update is the calculation of the
inverse Hessian H−1
. The approach adopted by quasi-Newton methods (of which
the BFGS algorithm is the most prominent) is to approximate the inverse with
a matrix Mt that is iteratively refined by low rank updates to become a better
approximation of H−1.
The specification and derivation of the BFGS approximation is given in many
textbooks on optimization, including Luenberger (1984).
Once the inverse Hessian approximation Mt is updated, the direction of descent
ρt is determined by ρt = Mtgt. A line search is performed in this direction to
determine the size of the step, ε*, taken in this direction. The final update to the
parameters is given by:
θt+1 = θt + ε* ρt.    (8.33)
Like the method of conjugate gradients, the BFGS algorithm iterates a series of
line searches with the direction incorporating second-order information. However
unlike conjugate gradients, the success of the approach is not heavily dependent
on the line search finding a point very close to the true minimum along the line.
Thus, relative to conjugate gradients, BFGS has the advantage that it can spend
less time refining each line search. On the other hand, the BFGS algorithm must
store the inverse Hessian matrix, M, which requires O(n²) memory, making BFGS
impractical for most modern deep learning models that typically have millions of
parameters.
Limited Memory BFGS (or L-BFGS) The memory costs of the BFGS
algorithm can be significantly decreased by avoiding storing the complete inverse
Hessian approximation M. The L-BFGS algorithm computes the approximation M
using the same method as the BFGS algorithm, but beginning with the assumption
that M^(t−1) is the identity matrix, rather than storing the approximation from one
step to the next. If used with exact line searches, the directions defined by L-BFGS
are mutually conjugate. However, unlike the method of conjugate gradients, this
procedure remains well behaved when the minimum of the line search is reached
only approximately. The L-BFGS strategy with no storage described here can be
generalized to include more information about the Hessian by storing some of the
vectors used to update M at each time step, which costs only O(n) per step.
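In practice a ready-made implementation is often used rather than a hand-rolled one; for example, SciPy exposes L-BFGS through its generic minimizer. The toy quadratic objective below is purely illustrative.

import numpy as np
from scipy.optimize import minimize

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])

def J(theta):
    return 0.5 * theta @ A @ theta - b @ theta

def grad(theta):
    return A @ theta - b

result = minimize(J, x0=np.zeros(2), jac=grad, method="L-BFGS-B")
print(result.x)  # close to the solution of A theta = b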
8.7 Optimization Strategies and Meta-Algorithms
Many optimization techniques are not exactly algorithms, but rather general
templates that can be specialized to yield algorithms, or subroutines that can be
incorporated into many different algorithms.
8.7.1 Batch Normalization
Batch normalization (Ioffe and Szegedy, 2015) is one of the most exciting recent
innovations in optimizing deep neural networks and it is actually not an optimization
algorithm at all. Instead, it is a method of adaptive reparametrization, motivated
by the difficulty of training very deep models.
Very deep models involve the composition of several functions or layers. The
gradient tells how to update each parameter, under the assumption that the other
layers do not change. In practice, we update all of the layers simultaneously.
When we make the update, unexpected results can happen because many functions
composed together are changed simultaneously, using updates that were computed
under the assumption that the other functions remain constant. As a simple
example, suppose we have a deep neural network that has only one unit per layer
and does not use an activation function at each hidden layer: ŷ = xw1w2w3 . . . wl.
Here, wi provides the weight used by layer i. The output of layer i is hi = hi−1wi.
The output ŷ is a linear function of the input x, but a nonlinear function of the
weights wi. Suppose our cost function has put a gradient of 1 on ŷ, so we wish to
decrease ŷ slightly. The back-propagation algorithm can then compute a gradient
g = ∇w ŷ. Consider what happens when we make an update w ← w − εg. The
first-order Taylor series approximation of ŷ predicts that the value of ŷ will decrease
by εgᵀg. If we wanted to decrease ŷ by .1, this first-order information available in
the gradient suggests we could set the learning rate ε to .1/(gᵀg). However, the actual
update will include second-order and third-order effects, on up to effects of order l.
The new value of ŷ is given by
x(w1 − εg1)(w2 − εg2) . . . (wl − εgl).    (8.34)
An example of one second-order term arising from this update is ε²g1g2 ∏_{i=3}^{l} wi.
This term might be negligible if ∏_{i=3}^{l} wi is small, or might be exponentially large
if the weights on layers 3 through l are greater than 1. This makes it very hard
to choose an appropriate learning rate, because the effects of an update to the
parameters for one layer depend so strongly on all of the other layers. Second-order
optimization algorithms address this issue by computing an update that takes these
second-order interactions into account, but we can see that in very deep networks,
even higher-order interactions can be significant. Even second-order optimization
algorithms are expensive and usually require numerous approximations that prevent
them from truly accounting for all significant second-order interactions. Building
an n-th order optimization algorithm for n > 2 thus seems hopeless. What can we
do instead?
Batch normalization provides an elegant way of reparametrizing almost any deep
network. The reparametrization significantly reduces the problem of coordinating
updates across many layers. Batch normalization can be applied to any input
or hidden layer in a network. Let H be a minibatch of activations of the layer
to normalize, arranged as a design matrix, with the activations for each example
appearing in a row of the matrix. To normalize H, we replace it with
H′ = (H − µ) / σ,    (8.35)
where µ is a vector containing the mean of each unit and σ is a vector containing
the standard deviation of each unit. The arithmetic here is based on broadcasting
the vector µ and the vector σ to be applied to every row of the matrix H . Within
each row, the arithmetic is element-wise, so Hi,j is normalized by subtracting µj
and dividing by σj. The rest of the network then operates on H′ in exactly the
same way that the original network operated on H.
At training time,
µ = (1/m) Σᵢ Hi,:    (8.36)
and
σ = √(δ + (1/m) Σᵢ (H − µ)²ᵢ),    (8.37)
where δ is a small positive value such as 10⁻⁸ imposed to avoid encountering
the undefined gradient of √z at z = 0. Crucially, we back-propagate through
these operations for computing the mean and the standard deviation, and for
applying them to normalize H. This means that the gradient will never propose
an operation that acts simply to increase the standard deviation or mean of
hi; the normalization operations remove the effect of such an action and zero
out its component in the gradient. This was a major innovation of the batch
normalization approach. Previous approaches had involved adding penalties to
the cost function to encourage units to have normalized activation statistics or
involved intervening to renormalize unit statistics after each gradient descent step.
The former approach usually resulted in imperfect normalization and the latter
usually resulted in significant wasted time as the learning algorithm repeatedly
proposed changing the mean and variance and the normalization step repeatedly
undid this change. Batch normalization reparametrizes the model to make some
units always be standardized by definition, deftly sidestepping both problems.
At test time, µ and σ may be replaced by running averages that were collected
during training time. This allows the model to be evaluated on a single example,
without needing to use definitions of µ and σ that depend on an entire minibatch.
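A minimal sketch of the training-time transformation in equations 8.35-8.37, together with running averages for test-time use and the learned scale and shift γ and β discussed a few paragraphs below; the momentum value is an illustrative choice.

import numpy as np

def batch_norm_train(H, gamma, beta, running_mean, running_var, momentum=0.9, delta=1e-8):
    # H is a design matrix of activations: one example per row.
    mu = H.mean(axis=0)
    var = H.var(axis=0)
    H_norm = (H - mu) / np.sqrt(delta + var)
    # Running statistics collected during training for use at test time.
    running_mean = momentum * running_mean + (1.0 - momentum) * mu
    running_var = momentum * running_var + (1.0 - momentum) * var
    return gamma * H_norm + beta, running_mean, running_var

def batch_norm_test(H, gamma, beta, running_mean, running_var, delta=1e-8):
    H_norm = (H - running_mean) / np.sqrt(delta + running_var)
    return gamma * H_norm + beta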
Revisiting the ŷ = xw1w2 . . . wl example, we see that we can mostly resolve the
difficulties in learning this model by normalizing hl−1. Suppose that x is drawn
from a unit Gaussian. Then hl−1 will also come from a Gaussian, because the
transformation from x to hl−1 is linear. However, hl−1 will no longer have zero mean
and unit variance. After applying batch normalization, we obtain the normalized
ĥl−1 that restores the zero mean and unit variance properties. For almost any
update to the lower layers, ĥl−1 will remain a unit Gaussian. The output ŷ may
then be learned as a simple linear function ŷ = wlĥl−1. Learning in this model is
now very simple because the parameters at the lower layers simply do not have an
effect in most cases; their output is always renormalized to a unit Gaussian. In
some corner cases, the lower layers can have an effect. Changing one of the lower
layer weights to 0 can make the output become degenerate, and changing the sign
of one of the lower weights can flip the relationship between ĥl−1 and y. These
situations are very rare. Without normalization, nearly every update would have
an extreme effect on the statistics of hl−1. Batch normalization has thus made
this model significantly easier to learn. In this example, the ease of learning of
course came at the cost of making the lower layers useless. In our linear example,
the lower layers no longer have any harmful effect, but they also no longer have
any beneficial effect. This is because we have normalized out the first and second
order statistics, which is all that a linear network can influence. In a deep neural
network with nonlinear activation functions, the lower layers can perform nonlinear
transformations of the data, so they remain useful. Batch normalization acts to
standardize only the mean and variance of each unit in order to stabilize learning,
but allows the relationships between units and the nonlinear statistics of a single
unit to change.
Because the final layer of the network is able to learn a linear transformation,
we may actually wish to remove all linear relationships between units within a
layer. Indeed, this is the approach taken by Desjardins et al. (2015), who provided
the inspiration for batch normalization. Unfortunately, eliminating all linear
interactions is much more expensive than standardizing the mean and standard
deviation of each individual unit, and so far batch normalization remains the most
practical approach.
Normalizing the mean and standard deviation of a unit can reduce the expressive
power of the neural network containing that unit. In order to maintain the
expressive power of the network, it is common to replace the batch of hidden unit
activations H with γH′ + β rather than simply the normalized H′. The variables
γ and β are learned parameters that allow the new variable to have any mean
and standard deviation. At first glance, this may seem useless—why did we set
the mean to 0, and then introduce a parameter that allows it to be set back to
any arbitrary value β? The answer is that the new parametrization can represent
the same family of functions of the input as the old parametrization, but the new
parametrization has different learning dynamics. In the old parametrization, the
mean of H was determined by a complicated interaction between the parameters
in the layers below H. In the new parametrization, the mean of γH′ + β is
determined solely by β. The new parametrization is much easier to learn with
gradient descent.
Most neural network layers take the form of φ(XW + b) where φ is some
fixed nonlinear activation function such as the rectified linear transformation. It
is natural to wonder whether we should apply batch normalization to the input
X, or to the transformed value XW + b. Ioffe and Szegedy (2015) recommend
the latter. More specifically, XW + b should be replaced by a normalized version
of XW . The bias term should be omitted because it becomes redundant with
the β parameter applied by the batch normalization reparametrization. The input
to a layer is usually the output of a nonlinear activation function such as the
rectified linear function in a previous layer. The statistics of the input are thus
more non-Gaussian and less amenable to standardization by linear operations.
In convolutional networks, described in chapter 9, it is important to apply the
same normalizing µ and σ at every spatial location within a feature map, so that
the statistics of the feature map remain the same regardless of spatial location.
8.7.2 Coordinate Descent
In some cases, it may be possible to solve an optimization problem quickly by
breaking it into separate pieces. If we minimize f(x) with respect to a single
variable xi, then minimize it with respect to another variable xj and so on,
repeatedly cycling through all variables, we are guaranteed to arrive at a (local)
minimum. This practice is known as coordinate descent, because we optimize
one coordinate at a time. More generally, block coordinate descent refers to
minimizing with respect to a subset of the variables simultaneously. The term
“coordinate descent” is often used to refer to block coordinate descent as well as
the strictly individual coordinate descent.
Coordinate descent makes the most sense when the different variables in the
optimization problem can be clearly separated into groups that play relatively
isolated roles, or when optimization with respect to one group of variables is
significantly more efficient than optimization with respect to all of the variables.
For example, consider the cost function
J(H, W) = Σ_{i,j} |Hi,j| + Σ_{i,j} (X − WᵀH)²_{i,j}.    (8.38)
This function describes a learning problem called sparse coding, where the goal is
to find a weight matrix W that can linearly decode a matrix of activation values
H to reconstruct the training set X. Most applications of sparse coding also
involve weight decay or a constraint on the norms of the columns of W, in order
to prevent the pathological solution with extremely small H and large W.
The function J is not convex. However, we can divide the inputs to the
training algorithm into two sets: the dictionary parameters W and the code
representations H . Minimizing the objective function with respect to either one of
these sets of variables is a convex problem. Block coordinate descent thus gives
us an optimization strategy that allows us to use efficient convex optimization
algorithms, by alternating between optimizing W with H fixed, then optimizing
H with W fixed.
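A rough sketch of this alternation for the objective in equation 8.38: the W update given H is an ordinary least-squares problem, while the H update given W is approximated here by a few proximal gradient (ISTA) steps; the iteration counts and initialization scale are illustrative, and the norm constraint on W mentioned above is omitted for brevity.

import numpy as np

def soft_threshold(Z, tau):
    # Proximal operator of the l1 penalty.
    return np.sign(Z) * np.maximum(np.abs(Z) - tau, 0.0)

def sparse_coding_bcd(X, n_codes, outer_iters=20, ista_iters=50, seed=0):
    # Fit X ≈ Wᵀ H, with X of shape (n_features, n_examples).
    rng = np.random.RandomState(seed)
    n_features, n_examples = X.shape
    W = rng.randn(n_codes, n_features) * 0.1
    H = rng.randn(n_codes, n_examples) * 0.1
    for _ in range(outer_iters):
        # Convex in W with H fixed: ordinary least squares for Xᵀ ≈ Hᵀ W.
        W = np.linalg.lstsq(H.T, X.T, rcond=None)[0]
        # Convex in H with W fixed: a few ISTA steps on the l1-penalized fit.
        step = 1.0 / (2.0 * np.linalg.norm(W @ W.T, 2) + 1e-8)
        for _ in range(ista_iters):
            grad = 2.0 * W @ (W.T @ H - X)
            H = soft_threshold(H - step * grad, step)
    return H, W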
Coordinate descent is not a very good strategy when the value of one variable
strongly influences the optimal value of another variable, as in the function
f(x) = (x1 − x2)² + α(x1² + x2²), where α is a positive constant. The first term encourages
the two variables to have similar value, while the second term encourages them
to be near zero. The solution is to set both to zero. Newton’s method can solve
the problem in a single step because it is a positive definite quadratic problem.
However, for small α, coordinate descent will make very slow progress because the
first term does not allow a single variable to be changed to a value that differs
significantly from the current value of the other variable.
8.7.3 Polyak Averaging
Polyak averaging (Polyak and Juditsky, 1992) consists of averaging together several
points in the trajectory through parameter space visited by an optimization
algorithm. If t iterations of gradient descent visit points θ^(1), . . . , θ^(t), then the
output of the Polyak averaging algorithm is θ̂^(t) = (1/t) Σᵢ θ^(i). On some problem
classes, such as gradient descent applied to convex problems, this approach has
strong convergence guarantees. When applied to neural networks, its justification
is more heuristic, but it performs well in practice. The basic idea is that the
optimization algorithm may leap back and forth across a valley several times
without ever visiting a point near the bottom of the valley. The average of all of
the locations on either side should be close to the bottom of the valley though.
In non-convex problems, the path taken by the optimization trajectory can be
very complicated and visit many different regions. Including points in parameter
space from the distant past that may be separated from the current point by large
barriers in the cost function does not seem like a useful behavior. As a result,
when applying Polyak averaging to non-convex problems, it is typical to use an
exponentially decaying running average:
θ̂^(t) = α θ̂^(t−1) + (1 − α) θ^(t).    (8.39)
The running average approach is used in numerous applications. See Szegedy
et al. (2015) for a recent example.
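A small sketch of the exponentially decaying running average of equation 8.39, maintained alongside whatever iterates the optimizer produces; the decay constant is an illustrative value.

import numpy as np

def polyak_average(theta_avg, theta, alpha=0.999):
    # Exponentially decaying running average of the parameters (equation 8.39).
    return alpha * theta_avg + (1.0 - alpha) * theta

theta_avg = np.zeros(10)
for _ in range(1000):
    theta = np.random.randn(10)   # stand-in for the optimizer's current iterate
    theta_avg = polyak_average(theta_avg, theta)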
8.7.4 Supervised Pretraining
Sometimes, directly training a model to solve a specific task can be too ambitious
if the model is complex and hard to optimize or if the task is very difficult. It is
sometimes more effective to train a simpler model to solve the task, then make
the model more complex. It can also be more effective to train the model to solve
a simpler task, then move on to confront the final task. These strategies that
involve training simple models on simple tasks before confronting the challenge of
training the desired model to perform the desired task are collectively known as
pretraining.
Greedy algorithms break a problem into many components, then solve for
the optimal version of each component in isolation. Unfortunately, combining the
individually optimal components is not guaranteed to yield an optimal complete
solution. However, greedy algorithms can be computationally much cheaper than
algorithms that solve for the best joint solution, and the quality of a greedy solution
is often acceptable if not optimal. Greedy algorithms may also be followed by a
fine-tuning stage in which a joint optimization algorithm searches for an optimal
solution to the full problem. Initializing the joint optimization algorithm with a
greedy solution can greatly speed it up and improve the quality of the solution it
finds.
Pretraining, and especially greedy pretraining, algorithms are ubiquitous in
deep learning. In this section, we describe specifically those pretraining algorithms
that break supervised learning problems into other simpler supervised learning
problems. This approach is known as greedy supervised pretraining.
In the original (Bengio et al., 2007) version of greedy supervised pretraining,
each stage consists of a supervised learning training task involving only a subset of
the layers in the final neural network. An example of greedy supervised pretraining
is illustrated in figure 8.7, in which each added hidden layer is pretrained as part
of a shallow supervised MLP, taking as input the output of the previously trained
hidden layer. Instead of pretraining one layer at a time, Simonyan and Zisserman
(2015) pretrain a deep convolutional network (eleven weight layers) and then use
the first four and last three layers from this network to initialize even deeper
networks (with up to nineteen layers of weights). The middle layers of the new,
very deep network are initialized randomly. The new network is then jointly trained.
Another option, explored by Yu et al. (2010), is to use the outputs of the previously
trained MLPs, as well as the raw input, as inputs for each added stage.
Why would greedy supervised pretraining help? The hypothesis initially
discussed by Bengio et al. (2007) is that it helps to provide better guidance to the
Figure 8.7: Illustration of one form of greedy supervised pretraining (Bengio et al., 2007).
(a) We start by training a sufficiently shallow architecture. (b) Another drawing of the
same architecture. (c) We keep only the input-to-hidden layer of the original network and
discard the hidden-to-output layer. We send the output of the first hidden layer as input
to another supervised single hidden layer MLP that is trained with the same objective
as the first network was, thus adding a second hidden layer. This can be repeated for as
many layers as desired. (d) Another drawing of the result, viewed as a feedforward network.
To further improve the optimization, we can jointly fine-tune all the layers, either only at
the end or at each stage of this process.
intermediate levels of a deep hierarchy. In general, pretraining may help both in
terms of optimization and in terms of generalization.
An approach related to supervised pretraining extends the idea to the context
of transfer learning: Yosinski et al. (2014) pretrain a deep convolutional net with 8
layers of weights on a set of tasks (a subset of the 1000 ImageNet object categories)
and then initialize a same-size network with the first k layers of the first net. All
the layers of the second network (with the upper layers initialized randomly) are
then jointly trained to perform a different set of tasks (another subset of the 1000
ImageNet object categories), with fewer training examples than for the first set of
tasks. Other approaches to transfer learning with neural networks are discussed in
section 15.2.
Another related line of work is the FitNets (Romero et al., 2015) approach.
This approach begins by training a network that has low enough depth and great
enough width (number of units per layer) to be easy to train. This network then
becomes a teacher for a second network, designated the student. The student
network is much deeper and thinner (eleven to nineteen layers) and would be
difficult to train with SGD under normal circumstances. The training of the
student network is made easier by training the student network not only to predict
the output for the original task, but also to predict the value of the middle layer
of the teacher network. This extra task provides a set of hints about how the
hidden layers should be used and can simplify the optimization problem. Additional
parameters are introduced to regress the middle layer of the 5-layer teacher network
from the middle layer of the deeper student network. However, instead of predicting
the final classification target, the objective is to predict the middle hidden layer
of the teacher network. The lower layers of the student networks thus have two
objectives: to help the outputs of the student network accomplish their task, as
well as to predict the intermediate layer of the teacher network. Although a thin
and deep network appears to be more difficult to train than a wide and shallow
network, the thin and deep network may generalize better and certainly has lower
computational cost if it is thin enough to have far fewer parameters. Without
the hints on the hidden layer, the student network performs very poorly in the
experiments, both on the training and test set. Hints on middle layers may thus
be one of the tools to help train neural networks that otherwise seem difficult to
train, but other optimization techniques or changes in the architecture may also
solve the problem.
8.7.5 Designing Models to Aid Optimization
To improve optimization, the best strategy is not always to improve the optimization
algorithm. Instead, many improvements in the optimization of deep models have
come from designing the models to be easier to optimize.
In principle, we could use activation functions that increase and decrease in
jagged non-monotonic patterns. However, this would make optimization extremely
difficult. In practice, it is more important to choose a model family that is easy to
optimize than to use a powerful optimization algorithm. Most of the advances in
neural network learning over the past 30 years have been obtained by changing
the model family rather than changing the optimization procedure. Stochastic
gradient descent with momentum, which was used to train neural networks in the
1980s, remains in use in modern state of the art neural network applications.
Specifically, modern neural networks reflect a design choice to use linear trans-
formations between layers and activation functions that are differentiable almost
everywhere and have significant slope in large portions of their domain. In par-
ticular, model innovations like the LSTM, rectified linear units and maxout units
have all moved toward using more linear functions than previous models like deep
networks based on sigmoidal units. These models have nice properties that make
optimization easier. The gradient flows through many layers provided that the
Jacobian of the linear transformation has reasonable singular values. Moreover,
linear functions consistently increase in a single direction, so even if the model’s
output is very far from correct, it is clear simply from computing the gradient
which direction its output should move to reduce the loss function. In other words,
modern neural nets have been designed so that their local gradient information
corresponds reasonably well to moving toward a distant solution.
Other model design strategies can help to make optimization easier. For
example, linear paths or skip connections between layers reduce the length of
the shortest path from the lower layer’s parameters to the output, and thus
mitigate the vanishing gradient problem (Srivastava et al., 2015). A related idea
to skip connections is adding extra copies of the output that are attached to the
intermediate hidden layers of the network, as in GoogLeNet (Szegedy et al., 2014a)
and deeply-supervised nets (Lee et al., 2014). These “auxiliary heads” are trained
to perform the same task as the primary output at the top of the network in order
to ensure that the lower layers receive a large gradient. When training is complete
the auxiliary heads may be discarded. This is an alternative to the pretraining
strategies, which were introduced in the previous section. In this way, one can
train jointly all the layers in a single phase but change the architecture, so that
intermediate layers (especially the lower ones) can get some hints about what they
should do, via a shorter path. These hints provide an error signal to lower layers.
8.7.6 Continuation Methods and Curriculum Learning
As argued in section 8.2.7, many of the challenges in optimization arise from the
global structure of the cost function and cannot be resolved merely by making better
estimates of local update directions. The predominant strategy for overcoming this
problem is to attempt to initialize the parameters in a region that is connected
to the solution by a short path through parameter space that local descent can
discover.
Continuation methods are a family of strategies that can make optimization
easier by choosing initial points to ensure that local optimization spends most of
its time in well-behaved regions of space. The idea behind continuation methods is
to construct a series of objective functions over the same parameters. In order to
minimize a cost function J(θ), we will construct new cost functions {J^(0), . . . , J^(n)}.
These cost functions are designed to be increasingly difficult, with J^(0) being fairly
easy to minimize, and J^(n), the most difficult, being J(θ), the true cost function
motivating the entire process. When we say that J^(i) is easier than J^(i+1), we
mean that it is well behaved over more of θ space. A random initialization is more
likely to land in the region where local descent can minimize the cost function
successfully because this region is larger. The series of cost functions are designed
so that a solution to one is a good initial point of the next. We thus begin by
solving an easy problem then refine the solution to solve incrementally harder
problems until we arrive at a solution to the true underlying problem.
Traditional continuation methods (predating the use of continuation methods
for neural network training) are usually based on smoothing the objective function.
See Wu (1997) for an example of such a method and a review of some related
methods. Continuation methods are also closely related to simulated annealing,
which adds noise to the parameters (Kirkpatrick et al., 1983). Continuation
methods have been extremely successful in recent years. See Mobahi and Fisher
(2015) for an overview of recent literature, especially for AI applications.
Continuation methods traditionally were mostly designed with the goal of
overcoming the challenge of local minima. Specifically, they were designed to
reach a global minimum despite the presence of many local minima. To do so,
these continuation methods would construct easier cost functions by “blurring” the
original cost function. This blurring operation can be done by approximating
J^(i)(θ) = E_{θ′∼N(θ′; θ, σ^(i)²)} [J(θ′)]    (8.40)
via sampling. The intuition for this approach is that some non-convex functions
become approximately convex when blurred. In many cases, this blurring preserves
enough information about the location of a global minimum that we can find the
global minimum by solving progressively less blurred versions of the problem. This
approach can break down in three different ways. First, it might successfully define
a series of cost functions where the first is convex and the optimum tracks from
one function to the next arriving at the global minimum, but it might require so
many incremental cost functions that the cost of the entire procedure remains high.
NP-hard optimization problems remain NP-hard, even when continuation methods
are applicable. The other two ways that continuation methods fail both correspond
to the method not being applicable. First, the function might not become convex,
no matter how much it is blurred. Consider, for example, the function J(θ) = −θ⊤θ.
Second, the function may become convex as a result of blurring, but the minimum
of this blurred function may track to a local rather than a global minimum of the
original cost function.
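To make the blurring of equation 8.40 concrete, the following is a minimal sketch that estimates a blurred cost by Monte Carlo sampling. It is only an illustration: the toy cost function, the sample count, and the helper name blurred_cost are assumptions made for the example, not part of any standard library or of the methods cited above.

```python
import numpy as np

def blurred_cost(J, theta, sigma, n_samples=1000, rng=None):
    """Monte Carlo estimate of equation 8.40: average J over Gaussian-perturbed
    copies of theta. Larger sigma gives a smoother, easier objective."""
    rng = np.random.default_rng(rng)
    samples = theta + sigma * rng.standard_normal((n_samples, theta.size))
    return np.mean([J(s) for s in samples])

def J(theta):
    # Toy non-convex cost: a quadratic bowl plus high-frequency ripples.
    return np.sum(theta ** 2) + 0.5 * np.sum(np.cos(20.0 * theta))

theta = np.array([1.5, -0.7])
for sigma in [1.0, 0.3, 0.0]:   # sigma -> 0 recovers the true cost J itself
    print(sigma, blurred_cost(J, theta, sigma, rng=0))
```

Solving the heavily blurred problem first and then shrinking sigma toward zero follows the progression from J^(0) to J^(n) described above.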
Though continuation methods were mostly originally designed to deal with the
problem of local minima, local minima are no longer believed to be the primary
problem for neural network optimization. Fortunately, continuation methods can
still help. The easier objective functions introduced by the continuation method can
eliminate flat regions, decrease variance in gradient estimates, improve conditioning
of the Hessian matrix, or do anything else that will either make local updates
easier to compute or improve the correspondence between local update directions
and progress toward a global solution.
Bengio et al. (2009) observed that an approach called curriculum learning
or shaping can be interpreted as a continuation method. Curriculum learning is
based on the idea of planning a learning process to begin by learning simple concepts
and progress to learning more complex concepts that depend on these simpler
concepts. This basic strategy was previously known to accelerate progress in animal
training (Skinner, 1958; Peterson, 2004; Krueger and Dayan, 2009) and machine
learning (Solomonoff, 1989; Elman, 1993; Sanger, 1994). Bengio et al. (2009)
justified this strategy as a continuation method, where earlier J^(i) are made easier by
increasing the influence of simpler examples (either by assigning their contributions
to the cost function larger coefficients, or by sampling them more frequently), and
experimentally demonstrated that better results could be obtained by following a
curriculum on a large-scale neural language modeling task. Curriculum learning
has been successful on a wide range of natural language (Spitkovsky et al., 2010;
Collobert et al., 2011a; Mikolov et al., 2011b; Tu and Honavar, 2011) and computer
vision (Kumar et al., 2010; Lee and Grauman, 2011; Supancic and Ramanan, 2013)
tasks. Curriculum learning was also verified as being consistent with the way in
which humans teach (Khan et al., 2011): teachers start by showing easier and
more prototypical examples and then help the learner refine the decision surface
with the less obvious cases. Curriculum-based strategies are more effective for
teaching humans than strategies based on uniform sampling of examples, and can
also increase the effectiveness of other teaching strategies (Basu and Christensen, 2013).
Another important contribution to research on curriculum learning arose in the
context of training recurrent neural networks to capture long-term dependencies:
Zaremba and Sutskever (2014) found that much better results were obtained with a
stochastic curriculum, in which a random mix of easy and difficult examples is always
presented to the learner, but where the average proportion of the more difficult
examples (here, those with longer-term dependencies) is gradually increased. With
a deterministic curriculum, no improvement over the baseline (ordinary training
from the full training set) was observed.
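The stochastic curriculum can be sketched as a minibatch sampling rule. The snippet below is a hedged illustration, not the exact procedure of Zaremba and Sutskever (2014): the difficulty scores, the linear schedule, and the cap on the hard-example fraction are all assumptions chosen for the example.

```python
import numpy as np

def sample_stochastic_curriculum(difficulties, step, total_steps, batch_size, rng):
    """Draw a minibatch in which the average proportion of harder examples grows
    with training progress, while easy examples are always mixed in."""
    hard_fraction = min(0.8, step / total_steps)       # gradually increased, capped
    is_hard = difficulties > np.median(difficulties)   # hypothetical difficulty split
    n_hard = int(round(hard_fraction * batch_size))
    hard_idx = rng.choice(np.flatnonzero(is_hard), size=n_hard, replace=True)
    easy_idx = rng.choice(np.flatnonzero(~is_hard), size=batch_size - n_hard, replace=True)
    return np.concatenate([hard_idx, easy_idx])

rng = np.random.default_rng(0)
difficulties = rng.random(1000)   # e.g., length of the dependency each example requires
batch = sample_stochastic_curriculum(difficulties, step=200, total_steps=1000,
                                     batch_size=32, rng=rng)
print(batch.shape)
```

A deterministic curriculum would instead present only the examples below the current difficulty threshold, which is the variant that showed no improvement in that study.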
We have now described the basic family of neural network models and how to
regularize and optimize them. In the chapters ahead, we turn to specializations of
the neural network family that allow neural networks to scale to very large sizes and
process input data that has special structure. The optimization methods discussed
in this chapter are often directly applicable to these specialized architectures with
little or no modification.
Chapter 9
Convolutional Networks
Convolutional networks (LeCun, 1989), also known as convolutional neural
networks or CNNs, are a specialized kind of neural network for processing data
that has a known, grid-like topology. Examples include time-series data, which can
be thought of as a 1D grid taking samples at regular time intervals, and image data,
which can be thought of as a 2D grid of pixels. Convolutional networks have been
tremendously successful in practical applications. The name “convolutional neural
network” indicates that the network employs a mathematical operation called
convolution. Convolution is a specialized kind of linear operation. Convolutional
networks are simply neural networks that use convolution in place of general matrix
multiplication in at least one of their layers.
In this chapter, we will first describe what convolution is. Next, we will
explain the motivation behind using convolution in a neural network. We will then
describe an operation called pooling, which almost all convolutional networks
employ. Usually, the operation used in a convolutional neural network does not
correspond precisely to the definition of convolution as used in other fields such
as engineering or pure mathematics. We will describe several variants on the
convolution function that are widely used in practice for neural networks. We
will also show how convolution may be applied to many kinds of data, with
different numbers of dimensions. We then discuss means of making convolution
more efficient. Convolutional networks stand out as an example of neuroscientific
principles influencing deep learning. We will discuss these neuroscientific principles,
then conclude with comments about the role convolutional networks have played
in the history of deep learning. One topic this chapter does not address is how to
choose the architecture of your convolutional network. The goal of this chapter is
to describe the kinds of tools that convolutional networks provide, while chapter 11
describes general guidelines for choosing which tools to use in which circumstances.
Research into convolutional network architectures proceeds so rapidly that a new
best architecture for a given benchmark is announced every few weeks to months,
rendering it impractical to describe the best architecture in print. However, the
best architectures have consistently been composed of the building blocks described
here.
9.1 The Convolution Operation
In its most general form, convolution is an operation on two functions of a real-
valued argument. To motivate the definition of convolution, we start with examples
of two functions we might use.
Suppose we are tracking the location of a spaceship with a laser sensor. Our
laser sensor provides a single output x(t), the position of the spaceship at time
t. Both x and t are real-valued, i.e., we can get a different reading from the laser
sensor at any instant in time.
Now suppose that our laser sensor is somewhat noisy. To obtain a less noisy
estimate of the spaceship’s position, we would like to average together several
measurements. Of course, more recent measurements are more relevant, so we will
want this to be a weighted average that gives more weight to recent measurements.
We can do this with a weighting function w(a), where a is the age of a measurement.
If we apply such a weighted average operation at every moment, we obtain a new
function providing a smoothed estimate of the position of the spaceship:
$$s(t) = \int x(a)\, w(t - a)\, da \tag{9.1}$$
This operation is called convolution. The convolution operation is typically
denoted with an asterisk:
$$s(t) = (x * w)(t) \tag{9.2}$$
In our example, w needs to be a valid probability density function, or the
output is not a weighted average. Also, w needs to be 0 for all negative arguments,
or it will look into the future, which is presumably beyond our capabilities. These
limitations are particular to our example though. In general, convolution is defined
for any functions for which the above integral is defined, and may be used for other
purposes besides taking weighted averages.
In convolutional network terminology, the first argument (in this example, the
function x) to the convolution is often referred to as the input and the second
argument (in this example, the function w) as the kernel. The output is sometimes
referred to as the feature map.
In our example, the idea of a laser sensor that can provide measurements
at every instant in time is not realistic. Usually, when we work with data on a
computer, time will be discretized, and our sensor will provide data at regular
intervals. In our example, it might be more realistic to assume that our laser
provides a measurement once per second. The time index t can then take on only
integer values. If we now assume that x and w are defined only on integer t, we
can define the discrete convolution:
$$s(t) = (x * w)(t) = \sum_{a=-\infty}^{\infty} x(a)\, w(t - a) \tag{9.3}$$
In machine learning applications, the input is usually a multidimensional array
of data and the kernel is usually a multidimensional array of parameters that are
adapted by the learning algorithm. We will refer to these multidimensional arrays
as tensors. Because each element of the input and kernel must be explicitly stored
separately, we usually assume that these functions are zero everywhere but the
finite set of points for which we store the values. This means that in practice we
can implement the infinite summation as a summation over a finite number of
array elements.
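As a concrete illustration of equation 9.3 with finitely stored values, the sketch below loops over the terms of the sum, treating both arrays as zero outside their stored range, and compares against NumPy's built-in routine. The function name discrete_conv1d and the toy arrays are hypothetical.

```python
import numpy as np

def discrete_conv1d(x, w):
    """Discrete convolution s(t) = sum_a x(a) w(t - a) (equation 9.3),
    with x and w assumed zero outside the stored range ("full" convolution)."""
    s = np.zeros(len(x) + len(w) - 1)
    for t in range(len(s)):
        for a in range(len(x)):
            if 0 <= t - a < len(w):
                s[t] += x[a] * w[t - a]
    return s

x = np.array([0.0, 1.0, 2.0, 3.0])   # sensor readings
w = np.array([0.5, 0.3, 0.2])        # weights favoring recent measurements
print(discrete_conv1d(x, w))
print(np.convolve(x, w))             # NumPy's built-in gives the same result
```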
Finally, we often use convolutions over more than one axis at a time. For
example, if we use a two-dimensional image I as our input, we probably also want
to use a two-dimensional kernel K:
$$S(i, j) = (I * K)(i, j) = \sum_m \sum_n I(m, n)\, K(i - m, j - n). \tag{9.4}$$
Convolution is commutative, meaning we can equivalently write:
$$S(i, j) = (K * I)(i, j) = \sum_m \sum_n I(i - m, j - n)\, K(m, n). \tag{9.5}$$
Usually the latter formula is more straightforward to implement in a machine
learning library, because there is less variation in the range of valid values of m
and n.
The commutative property of convolution arises because we have flipped the
kernel relative to the input, in the sense that as m increases, the index into the
input increases, but the index into the kernel decreases. The only reason to flip
the kernel is to obtain the commutative property. While the commutative property
is useful for writing proofs, it is not usually an important property of a neural
network implementation. Instead, many neural network libraries implement a
related function called the cross-correlation, which is the same as convolution
but without flipping the kernel:
$$S(i, j) = (I * K)(i, j) = \sum_m \sum_n I(i + m, j + n)\, K(m, n). \tag{9.6}$$
Many machine learning libraries implement cross-correlation but call it convolution.
In this text we will follow this convention of calling both operations convolution,
and specify whether we mean to flip the kernel or not in contexts where kernel
flipping is relevant. In the context of machine learning, the learning algorithm will
learn the appropriate values of the kernel in the appropriate place, so an algorithm
based on convolution with kernel flipping will learn a kernel that is flipped relative
to the kernel learned by an algorithm without the flipping. It is also rare for
convolution to be used alone in machine learning; instead convolution is used
simultaneously with other functions, and the combination of these functions does
not commute regardless of whether the convolution operation flips its kernel or
not.
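A minimal sketch of the distinction, assuming small 2-D arrays and restricting the output to "valid" positions: cross-correlation implements equation 9.6 directly, and flipping the kernel before cross-correlating recovers the convolution of equation 9.4. The helper names are hypothetical.

```python
import numpy as np

def cross_correlate2d(I, K):
    """Equation 9.6, restricted to "valid" positions where K fits inside I."""
    out_h = I.shape[0] - K.shape[0] + 1
    out_w = I.shape[1] - K.shape[1] + 1
    S = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            S[i, j] = np.sum(I[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return S

def convolve2d(I, K):
    """Convolution equals cross-correlation with a flipped kernel (equations 9.4-9.6)."""
    return cross_correlate2d(I, K[::-1, ::-1])

I = np.arange(12, dtype=float).reshape(3, 4)
K = np.array([[1.0, 2.0], [3.0, 4.0]])
print(cross_correlate2d(I, K))   # what many deep learning libraries call "convolution"
print(convolve2d(I, K))          # true convolution, kernel flipped
```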
See figure 9.1 for an example of convolution (without kernel flipping) applied to a 2-D tensor.
Discrete convolution can be viewed as multiplication by a matrix. However, the
matrix has several entries constrained to be equal to other entries. For example,
for univariate discrete convolution, each row of the matrix is constrained to be
equal to the row above shifted by one element. This is known as a Toeplitz
matrix. In two dimensions, a doubly block circulant matrix corresponds to
convolution. In addition to these constraints that several elements be equal to
each other, convolution usually corresponds to a very sparse matrix (a matrix
whose entries are mostly equal to zero). This is because the kernel is usually much
smaller than the input image. Any neural network algorithm that works with
matrix multiplication and does not depend on specific properties of the matrix
structure should work with convolution, without requiring any further changes
to the neural network. Typical convolutional neural networks do make use of
further specializations in order to deal with large inputs efficiently, but these are
not strictly necessary from a theoretical perspective.
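The matrix view can be sketched for the univariate case as follows. The helper conv_matrix is hypothetical; it builds the matrix for cross-correlation (convolution without kernel flipping), which is enough to show the Toeplitz structure and the sparsity.

```python
import numpy as np

def conv_matrix(k, input_len):
    """Build the matrix whose product with x equals the "valid" 1-D
    cross-correlation of x with kernel k. Each row is the row above
    shifted by one element (a Toeplitz structure), and most entries
    are zero because the kernel is much smaller than the input."""
    out_len = input_len - len(k) + 1
    W = np.zeros((out_len, input_len))
    for i in range(out_len):
        W[i, i:i + len(k)] = k
    return W

k = np.array([1.0, -2.0, 1.0])            # second-difference kernel
x = np.arange(8, dtype=float) ** 2         # a quadratic signal
W = conv_matrix(k, len(x))
print(W)
print(W @ x)                               # convolution expressed as a matrix product
print(np.correlate(x, k, mode="valid"))    # same result from NumPy
```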
[Figure 9.1 diagram: an input grid with entries a–l, a 2x2 kernel with entries w, x, y, z, and the resulting output grid whose entries are sums such as aw + bx + ey + fz.]
Figure 9.1: An example of 2-D convolution without kernel-flipping. In this case we restrict
the output to only positions where the kernel lies entirely within the image, called “valid”
convolution in some contexts. We draw boxes with arrows to indicate how the upper-left
element of the output tensor is formed by applying the kernel to the corresponding
upper-left region of the input tensor.
9.2 Motivation
Convolution leverages three important ideas that can help improve a machine
learning system: sparse interactions, parameter sharing and equivariant
representations. Moreover, convolution provides a means for working with
inputs of variable size. We now describe each of these ideas in turn.
Traditional neural network layers use matrix multiplication by a matrix of
parameters with a separate parameter describing the interaction between each input
unit and each output unit. This means every output unit interacts with every input
unit. Convolutional networks, however, typically have sparse interactions (also
referred to as sparse connectivity or sparse weights). This is accomplished by
making the kernel smaller than the input. For example, when processing an image,
the input image might have thousands or millions of pixels, but we can detect small,
meaningful features such as edges with kernels that occupy only tens or hundreds of
pixels. This means that we need to store fewer parameters, which both reduces the
memory requirements of the model and improves its statistical efficiency. It also
means that computing the output requires fewer operations. These improvements
in efficiency are usually quite large. If there are m inputs and n outputs, then
matrix multiplication requires m × n parameters, and the algorithms used in practice
have O(m × n) runtime (per example). If we limit the number of connections
each output may have to k, then the sparsely connected approach requires only
k × n parameters and O(k × n) runtime. For many practical applications, it is
possible to obtain good performance on the machine learning task while keeping
k several orders of magnitude smaller than m. For graphical demonstrations of
sparse connectivity, see figure 9.2 and figure 9.3. In a deep convolutional network,
units in the deeper layers may indirectly interact with a larger portion of the input,
as shown in figure 9.4. This allows the network to efficiently describe complicated
interactions between many variables by constructing such interactions from simple
building blocks that each describe only sparse interactions.
Parameter sharing refers to using the same parameter for more than one
function in a model. In a traditional neural net, each element of the weight matrix
is used exactly once when computing the output of a layer. It is multiplied by
one element of the input and then never revisited. As a synonym for parameter
sharing, one can say that a network has tied weights, because the value of the
weight applied to one input is tied to the value of a weight applied elsewhere. In
a convolutional neural net, each member of the kernel is used at every position
of the input (except perhaps some of the boundary pixels, depending on the
design decisions regarding the boundary). The parameter sharing used by the
convolution operation means that rather than learning a separate set of parameters
Figure 9.2: Sparse connectivity, viewed from below: We highlight one input unit, x3,
and also highlight the output units in s that are affected by this unit. (Top)When s is
formed by convolution with a kernel of width 3, only three outputs are affected by x.
(Bottom)When s is formed by matrix multiplication, connectivity is no longer sparse, so
all of the outputs are affected by x3.
Figure 9.3: Sparse connectivity, viewed from above: We highlight one output unit, s3,
and also highlight the input units in x that affect this unit. These units are known
as the receptive field of s3. (Top)When s is formed by convolution with a kernel of
width 3, only three inputs affect s3. (Bottom)When s is formed by matrix multiplication,
connectivity is no longer sparse, so all of the inputs affect s3.
Figure 9.4: The receptive field of the units in the deeper layers of a convolutional network
is larger than the receptive field of the units in the shallow layers. This effect increases if
the network includes architectural features like strided convolution (figure 9.12) or pooling
(section 9.3). This means that even though direct connections in a convolutional net are
very sparse, units in the deeper layers can be indirectly connected to all or most of the
input image.
Figure 9.5: Parameter sharing: Black arrows indicate the connections that use a particular
parameter in two different models. (Top)The black arrows indicate uses of the central
element of a 3-element kernel in a convolutional model. Due to parameter sharing, this
single parameter is used at all input locations. (Bottom)The single black arrow indicates
the use of the central element of the weight matrix in a fully connected model. This model
has no parameter sharing so the parameter is used only once.
for every location, we learn only one set. This does not affect the runtime of
forward propagation—it is still O(k × n)—but it does further reduce the storage
requirements of the model to k parameters. Recall that k is usually several orders
of magnitude less than m. Since m and n are usually roughly the same size, k is
practically insignificant compared to m × n. Convolution is thus dramatically more
efficient than dense matrix multiplication in terms of the memory requirements
and statistical efficiency. For a graphical depiction of how parameter sharing works,
see figure 9.5.
As an example of both of these first two principles in action, figure 9.6 shows
how sparse connectivity and parameter sharing can dramatically improve the
efficiency of a linear function for detecting edges in an image.
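A small sketch of that edge detector, assuming a random array stands in for a real grayscale image: the horizontal difference described in figure 9.6 is just a two-element kernel applied at every position of the input.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((280, 320))        # placeholder for a real grayscale image

# The figure-9.6 edge detector: each output pixel is a pixel minus its left neighbor.
edges = image[:, 1:] - image[:, :-1]  # same as cross-correlating each row with [-1, 1]
print(image.shape, edges.shape)       # (280, 320) -> (280, 319)

# The same result via NumPy's 1-D convolution, which flips the kernel internally,
# so true convolution with [1, -1] equals cross-correlation with [-1, 1].
edges2 = np.stack([np.convolve(row, [1.0, -1.0], mode="valid") for row in image])
print(np.allclose(edges, edges2))

# Expressed as a dense matrix acting on the flattened image, the same map would need
# (319 * 280) x (320 * 280) entries; convolution needs only the two shared weights.
```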
In the case of convolution, the particular form of parameter sharing causes the
layer to have a property called equivariance to translation. To say a function is
equivariant means that if the input changes, the output changes in the same way.
Specifically, a function f(x) is equivariant to a function g if f(g(x)) = g(f(x)).
In the case of convolution, if we let g be any function that translates the input,
i.e., shifts it, then the convolution function is equivariant to g. For example, let I
be a function giving image brightness at integer coordinates. Let g be a function
mapping one image function to another image function, such that I′ = g(I) is
the image function with I′(x, y) = I(x − 1, y). This shifts every pixel of I one
unit to the right. If we apply this transformation to I, then apply convolution,
the result will be the same as if we applied convolution to I
, then applied the
transformation g to the output. When processing time series data, this means
that convolution produces a sort of timeline that shows when different features
appear in the input. If we move an event later in time in the input, the exact
same representation of it will appear in the output, just later in time. Similarly
with images, convolution creates a 2-D map of where certain features appear in
the input. If we move the object in the input, its representation will move the
same amount in the output. This is useful for when we know that some function
of a small number of neighboring pixels is useful when applied to multiple input
locations. For example, when processing images, it is useful to detect edges in
the first layer of a convolutional network. The same edges appear more or less
everywhere in the image, so it is practical to share parameters across the entire
image. In some cases, we may not wish to share parameters across the entire
image. For example, if we are processing images that are cropped to be centered
on an individual’s face, we probably want to extract different features at different
locations—the part of the network processing the top of the face needs to look for
eyebrows, while the part of the network processing the bottom of the face needs to
look for a chin.
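The equivariance property can also be checked numerically. The sketch below, with hypothetical helper names and a zero-padded one-pixel shift, verifies that convolving a shifted image matches shifting the convolved image, away from the boundary column introduced by the shift and the "valid" cropping.

```python
import numpy as np

def cross_correlate2d(I, K):
    out = np.zeros((I.shape[0] - K.shape[0] + 1, I.shape[1] - K.shape[1] + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(I[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return out

def shift_right(I):
    """g(I): shift the image one pixel to the right, padding the new column with zeros."""
    out = np.zeros_like(I)
    out[:, 1:] = I[:, :-1]
    return out

rng = np.random.default_rng(0)
I = rng.random((6, 6))
K = rng.random((3, 3))

a = cross_correlate2d(shift_right(I), K)   # convolve the shifted input
b = shift_right(cross_correlate2d(I, K))   # shift the convolved output
print(np.allclose(a[:, 1:], b[:, 1:]))     # equal away from the boundary column
```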
Convolution is not naturally equivariant to some other transformations, such
as changes in the scale or rotation of an image. Other mechanisms are necessary
for handling these kinds of transformations.
Finally, some kinds of data cannot be processed by neural networks defined by
matrix multiplication with a fixed-shape matrix. Convolution enables processing
of some of these kinds of data. We discuss this further in section 9.7.
9.3 Pooling
A typical layer of a convolutional network consists of three stages (see figure 9.7).
In the first stage, the layer performs several convolutions in parallel to produce a
set of linear activations. In the second stage, each linear activation is run through
a nonlinear activation function, such as the rectified linear activation function.
This stage is sometimes called the detector stage. In the third stage, we use a
pooling function to modify the output of the layer further.
A pooling function replaces the output of the net at a certain location with a
summary statistic of the nearby outputs. For example, the max pooling
Figure 9.6: Efficiency of edge detection. The image on the right was formed by taking
each pixel in the original image and subtracting the value of its neighboring pixel on the
left. This shows the strength of all of the vertically oriented edges in the input image,
which can be a useful operation for object detection. Both images are 280 pixels tall.
The input image is 320 pixels wide while the output image is 319 pixels wide. This
transformation can be described by a convolution kernel containing two elements, and
requires 319 × 280 × 3 = 267,960 floating point operations (two multiplications and
one addition per output pixel) to compute using convolution. To describe the same
transformation with a matrix multiplication would take 320 × 280 × 319 × 280, or over
eight billion, entries in the matrix, making convolution four billion times more efficient for
representing this transformation. The straightforward matrix multiplication algorithm
performs over sixteen billion floating point operations, making convolution roughly 60,000
times more efficient computationally. Of course, most of the entries of the matrix would be
zero. If we stored only the nonzero entries of the matrix, then both matrix multiplication
and convolution would require the same number of floating point operations to compute.
The matrix would still need to contain 2 × 319 × 280 = 178,640 entries. Convolution
is an extremely efficient way of describing transformations that apply the same linear
transformation of a small, local region across the entire input. (Photo credit: Paula
Goodfellow)
[Figure 9.7 diagram. Complex layer terminology: input to layer → convolution stage (affine transform) → detector stage (nonlinearity, e.g., rectified linear) → pooling stage → next layer. Simple layer terminology: input to layers → convolution layer (affine transform) → detector layer (nonlinearity, e.g., rectified linear) → pooling layer → next layer.]
Figure 9.7: The components of a typical convolutional neural network layer. There are two
commonly used sets of terminology for describing these layers. (Left)In this terminology,
the convolutional net is viewed as a small number of relatively complex layers, with
each layer having many “stages.” In this terminology, there is a one-to-one mapping
between kernel tensors and network layers. In this book we generally use this terminology.
(Right)In this terminology, the convolutional net is viewed as a larger number of simple
layers; every step of processing is regarded as a layer in its own right. This means that
not every “layer” has parameters.
(Zhou and Chellappa, 1988) operation reports the maximum output within a rectangular
neighborhood. Other popular pooling functions include the average of a rectangular
neighborhood, the L2 norm of a rectangular neighborhood, or a weighted average
based on the distance from the central pixel.
In all cases, pooling helps to make the representation become approximately
invariant to small translations of the input. Invariance to translation means that
if we translate the input by a small amount, the values of most of the pooled
outputs do not change. See figure 9.8 for an example of how this works. Invariance
to local translation can be a very useful property if we care more about whether
some feature is present than exactly where it is. For example, when determining
whether an image contains a face, we need not know the location of the eyes with
pixel-perfect accuracy, we just need to know that there is an eye on the left side
of the face and an eye on the right side of the face. In other contexts, it is more
important to preserve the location of a feature. For example, if we want to find a
corner defined by two edges meeting at a specific orientation, we need to preserve
the location of the edges well enough to test whether they meet.
The use of pooling can be viewed as adding an infinitely strong prior that
the function the layer learns must be invariant to small translations. When this
assumption is correct, it can greatly improve the statistical efficiency of the network.
Pooling over spatial regions produces invariance to translation, but if we pool
over the outputs of separately parametrized convolutions, the features can learn
which transformations to become invariant to (see figure 9.9).
Because pooling summarizes the responses over a whole neighborhood, it is
possible to use fewer pooling units than detector units, by reporting summary
statistics for pooling regions spaced k pixels apart rather than 1 pixel apart. See
figure 9.10 for an example. This improves the computational efficiency of the
network because the next layer has roughly k times fewer inputs to process. When
the number of parameters in the next layer is a function of its input size (such as
when the next layer is fully connected and based on matrix multiplication) this
reduction in the input size can also result in improved statistical efficiency and
reduced memory requirements for storing the parameters.
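A minimal sketch of pooling with a stride, matching the pool width of three and stride of two used in figure 9.10; the helper name and the example detector values are illustrative only.

```python
import numpy as np

def max_pool_1d(detector_outputs, width=3, stride=2):
    """Max pooling with a pool width and a stride between pools, as in figure 9.10.
    The final pool may be smaller so that no detector unit is ignored."""
    pooled = []
    for start in range(0, len(detector_outputs), stride):
        pooled.append(np.max(detector_outputs[start:start + width]))
    return np.array(pooled)

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.1])
print(max_pool_1d(detector, width=3, stride=2))   # three summary statistics instead of six
```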
For many tasks, pooling is essential for handling inputs of varying size. For
example, if we want to classify images of variable size, the input to the classification
layer must have a fixed size. This is usually accomplished by varying the size of an
offset between pooling regions so that the classification layer always receives the
same number of summary statistics regardless of the input size. For example, the
final pooling layer of the network may be defined to output four sets of summary
statistics, one for each quadrant of an image, regardless of the image size.
Figure 9.8: Max pooling introduces invariance. (Top)A view of the middle of the output
of a convolutional layer. The bottom row shows outputs of the nonlinearity. The top
row shows the outputs of max pooling, with a stride of one pixel between pooling regions
and a pooling region width of three pixels. (Bottom)A view of the same network, after
the input has been shifted to the right by one pixel. Every value in the bottom row has
changed, but only half of the values in the top row have changed, because the max pooling
units are only sensitive to the maximum value in the neighborhood, not its exact location.
Figure 9.9: Example of learned invariances: A pooling unit that pools over multiple features
that are learned with separate parameters can learn to be invariant to transformations of
the input. Here we show how a set of three learned filters and a max pooling unit can learn
to become invariant to rotation. All three filters are intended to detect a hand-written 5.
Each filter attempts to match a slightly different orientation of the 5. When a 5 appears in
the input, the corresponding filter will match it and cause a large activation in a detector
unit. The max pooling unit then has a large activation regardless of which detector unit
was activated. We show here how the network processes two different inputs, resulting
in two different detector units being activated. The effect on the pooling unit is roughly
the same either way. This principle is leveraged by maxout networks (Goodfellow et al.,
2013a) and other convolutional networks. Max pooling over spatial positions is naturally
invariant to translation; this multi-channel approach is only necessary for learning other
transformations.
Figure 9.10: Pooling with downsampling. Here we use max-pooling with a pool width of
three and a stride between pools of two. This reduces the representation size by a factor
of two, which reduces the computational and statistical burden on the next layer. Note
that the rightmost pooling region has a smaller size, but must be included if we do not
want to ignore some of the detector units.
Some theoretical work gives guidance as to which kinds of pooling one should
use in various situations (Boureau et al., 2010). It is also possible to dynamically
pool features together, for example, by running a clustering algorithm on the
locations of interesting features (Boureau et al., 2011). This approach yields a
different set of pooling regions for each image. Another approach is to learn a
single pooling structure that is then applied to all images (Jia et al., 2012).
Pooling can complicate some kinds of neural network architectures that use
top-down information, such as Boltzmann machines and autoencoders. These
issues will be discussed further when we present these types of networks in part III.
Pooling in convolutional Boltzmann machines is presented in section 20.6. The
inverse-like operations on pooling units needed in some differentiable networks will
be covered in section 20.10.6.
Some examples of complete convolutional network architectures for classification
using convolution and pooling are shown in figure 9.11.
9.4 Convolution and Pooling as an Infinitely Strong
Prior
Recall the concept of a prior probability distribution from section 5.2. This is
a probability distribution over the parameters of a model that encodes our beliefs
about what models are reasonable, before we have seen any data.
Priors can be considered weak or strong depending on how concentrated the
probability density in the prior is. A weak prior is a prior distribution with high
entropy, such as a Gaussian distribution with high variance. Such a prior allows
the data to move the parameters more or less freely. A strong prior has very low
entropy, such as a Gaussian distribution with low variance. Such a prior plays a
more active role in determining where the parameters end up.
An infinitely strong prior places zero probability on some parameters and says
that these parameter values are completely forbidden, regardless of how much
support the data gives to those values.
We can imagine a convolutional net as being similar to a fully connected net,
but with an infinitely strong prior over its weights. This infinitely strong prior
says that the weights for one hidden unit must be identical to the weights of its
neighbor, but shifted in space. The prior also says that the weights must be zero,
except for in the small, spatially contiguous receptive field assigned to that hidden
unit. Overall, we can think of the use of convolution as introducing an infinitely
strong prior probability distribution over the parameters of a layer. This prior
[Figure 9.11 diagram. (Left) Input image 256x256x3 → convolution + ReLU: 256x256x64 → pooling with stride 4: 64x64x64 → convolution + ReLU: 64x64x64 → pooling with stride 4: 16x16x64 → reshape to vector: 16,384 units → matrix multiply: 1,000 units → softmax: 1,000 class probabilities. (Center) Same through the second convolution + ReLU, then pooling to a 3x3 grid: 3x3x64 → reshape to vector: 576 units → matrix multiply: 1,000 units → softmax: 1,000 class probabilities. (Right) Same through the second convolution + ReLU, then pooling with stride 4: 16x16x64 → convolution: 16x16x1,000 → average pooling: 1x1x1,000 → softmax: 1,000 class probabilities.]
Figure 9.11: Examples of architectures for classification with convolutional networks. The
specific strides and depths used in this figure are not advisable for real use; they are
designed to be very shallow in order to fit onto the page. Real convolutional networks
also often involve significant amounts of branching, unlike the chain structures used
here for simplicity. (Left)A convolutional network that processes a fixed image size.
After alternating between convolution and pooling for a few layers, the tensor for the
convolutional feature map is reshaped to flatten out the spatial dimensions. The rest
of the network is an ordinary feedforward network classifier, as described in chapter 6.
(Center)A convolutional network that processes a variable-sized image, but still maintains
a fully connected section. This network uses a pooling operation with variably-sized pools
but a fixed number of pools, in order to provide a fixed-size vector of 576 units to the
fully connected portion of the network. (Right)A convolutional network that does not
have any fully connected weight layer. Instead, the last convolutional layer outputs one
feature map per class. The model presumably learns a map of how likely each class is to
occur at each spatial location. Averaging a feature map down to a single value provides
the argument to the softmax classifier at the top.
says that the function the layer should learn contains only local interactions and is
equivariant to translation. Likewise, the use of pooling is an infinitely strong prior
that each unit should be invariant to small translations.
Of course, implementing a convolutional net as a fully connected net with an
infinitely strong prior would be extremely computationally wasteful. But thinking
of a convolutional net as a fully connected net with an infinitely strong prior can
give us some insights into how convolutional nets work.
One key insight is that convolution and pooling can cause underfitting. Like
any prior, convolution and pooling are only useful when the assumptions made
by the prior are reasonably accurate. If a task relies on preserving precise spatial
information, then using pooling on all features can increase the training error.
Some convolutional network architectures (Szegedy et al., 2014a) are designed to
use pooling on some channels but not on other channels, in order to get both
highly invariant features and features that will not underfit when the translation
invariance prior is incorrect. When a task involves incorporating information from
very distant locations in the input, then the prior imposed by convolution may be
inappropriate.
Another key insight from this view is that we should only compare convolu-
tional models to other convolutional models in benchmarks of statistical learning
performance. Models that do not use convolution would be able to learn even
if we permuted all of the pixels in the image. For many image datasets, there
are separate benchmarks for models that are permutation invariant and must
discover the concept of topology via learning, and models that have the knowledge
of spatial relationships hard-coded into them by their designer.
9.5 Variants of the Basic Convolution Function
When discussing convolution in the context of neural networks, we usually do
not refer exactly to the standard discrete convolution operation as it is usually
understood in the mathematical literature. The functions used in practice differ
slightly. Here we describe these differences in detail, and highlight some useful
properties of the functions used in neural networks.
First, when we refer to convolution in the context of neural networks, we usually
actually mean an operation that consists of many applications of convolution in
parallel. This is because convolution with a single kernel can only extract one kind
of feature, albeit at many spatial locations. Usually we want each layer of our
network to extract many kinds of features, at many locations.
Additionally, the input is usually not just a grid of real values. Rather, it is a
grid of vector-valued observations. For example, a color image has a red, green
and blue intensity at each pixel. In a multilayer convolutional network, the input
to the second layer is the output of the first layer, which usually has the output
of many different convolutions at each position. When working with images, we
usually think of the input and output of the convolution as being 3-D tensors, with
one index into the different channels and two indices into the spatial coordinates
of each channel. Software implementations usually work in batch mode, so they
will actually use 4-D tensors, with the fourth axis indexing different examples in
the batch, but we will omit the batch axis in our description here for simplicity.
Because convolutional networks usually use multi-channel convolution, the
linear operations they are based on are not guaranteed to be commutative, even if
kernel-flipping is used. These multi-channel operations are only commutative if
each operation has the same number of output channels as input channels.
Assume we have a 4-D kernel tensor K with element Ki,j,k,l giving the connection
strength between a unit in channel i of the output and a unit in channel j of the
input, with an offset of k rows and l columns between the output unit and the
input unit. Assume our input consists of observed data V with element Vi,j,k giving
the value of the input unit within channel i at row j and column k. Assume our
output consists of Z with the same format as V. If Z is produced by convolving K
across V without flipping K, then
$$Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1}\, K_{i,l,m,n} \tag{9.7}$$
where the summation over l, m and n is over all values for which the tensor indexing
operations inside the summation are valid. In linear algebra notation, we index into
arrays using a 1 for the first entry. This necessitates the −1 in the above formula.
Programming languages such as C and Python index starting from 0, rendering
the above expression even simpler.
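A direct, loop-based sketch of equation 9.7 using zero-based indexing (so the −1 offsets disappear, as noted above); the helper name multi_channel_conv and the array shapes are hypothetical, and the code keeps only "valid" output positions.

```python
import numpy as np

def multi_channel_conv(V, K):
    """Equation 9.7 with zero-based indexing: Z[i, j, k] = sum over l, m, n of
    V[l, j + m, k + n] * K[i, l, m, n], keeping only valid positions.
    V: input (channels, rows, cols).  K: kernel (out_ch, in_ch, kr, kc)."""
    out_ch, in_ch, kr, kc = K.shape
    _, rows, cols = V.shape
    Z = np.zeros((out_ch, rows - kr + 1, cols - kc + 1))
    for i in range(out_ch):
        for j in range(Z.shape[1]):
            for k in range(Z.shape[2]):
                Z[i, j, k] = np.sum(V[:, j:j + kr, k:k + kc] * K[i])
    return Z

rng = np.random.default_rng(0)
V = rng.random((3, 8, 8))      # e.g., a small RGB patch
K = rng.random((4, 3, 2, 2))   # four output channels
print(multi_channel_conv(V, K).shape)   # (4, 7, 7)
```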
We may want to skip over some positions of the kernel in order to reduce the
computational cost (at the expense of not extracting our features as finely). We
can think of this as downsampling the output of the full convolution function. If
we want to sample only every s pixels in each direction in the output, then we can
define a downsampled convolution function c such that
$$Z_{i,j,k} = c(K, V, s)_{i,j,k} = \sum_{l,m,n} \left[ V_{l,\, (j-1)\times s + m,\, (k-1)\times s + n}\, K_{i,l,m,n} \right]. \tag{9.8}$$
We refer to s as the stride of this downsampled convolution. It is also possible
to define a separate stride for each direction of motion. See figure 9.12 for an illustration.
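Equation 9.8 can be sketched in the same loop-based style, together with a check of the equivalence illustrated in figure 9.12: strided convolution gives the same values as unit-stride convolution followed by downsampling, while skipping the discarded computations. Helper names are hypothetical.

```python
import numpy as np

def conv_valid(V, K):
    """Unit-stride multi-channel "valid" convolution (equation 9.7, zero-based)."""
    out_ch, in_ch, kr, kc = K.shape
    _, rows, cols = V.shape
    Z = np.zeros((out_ch, rows - kr + 1, cols - kc + 1))
    for i in range(out_ch):
        for j in range(Z.shape[1]):
            for k in range(Z.shape[2]):
                Z[i, j, k] = np.sum(V[:, j:j + kr, k:k + kc] * K[i])
    return Z

def strided_conv(V, K, s):
    """Equation 9.8: sample the convolution only every s pixels in each direction."""
    out_ch, _, kr, kc = K.shape
    _, rows, cols = V.shape
    out_r = (rows - kr) // s + 1
    out_c = (cols - kc) // s + 1
    Z = np.zeros((out_ch, out_r, out_c))
    for i in range(out_ch):
        for j in range(out_r):
            for k in range(out_c):
                Z[i, j, k] = np.sum(V[:, j * s:j * s + kr, k * s:k * s + kc] * K[i])
    return Z

rng = np.random.default_rng(0)
V, K, s = rng.random((3, 9, 9)), rng.random((2, 3, 3, 3)), 2
# Strided convolution equals unit-stride convolution followed by downsampling,
# but avoids computing the values that would be discarded (figure 9.12).
print(np.allclose(strided_conv(V, K, s), conv_valid(V, K)[:, ::s, ::s]))
```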
One essential feature of any convolutional network implementation is the ability
to implicitly zero-pad the input V in order to make it wider. Without this feature,
the width of the representation shrinks by one pixel less than the kernel width
at each layer. Zero padding the input allows us to control the kernel width and
the size of the output independently. Without zero padding, we are forced to
choose between shrinking the spatial extent of the network rapidly and using small
kernels—both scenarios that significantly limit the expressive power of the network.
See figure 9.13 for an example.
Three special cases of the zero-padding setting are worth mentioning. One is
the extreme case in which no zero-padding is used whatsoever, and the convolution
kernel is only allowed to visit positions where the entire kernel is contained entirely
within the image. In MATLAB terminology, this is called valid convolution. In
this case, all pixels in the output are a function of the same number of pixels in
the input, so the behavior of an output pixel is somewhat more regular. However,
the size of the output shrinks at each layer. If the input image has width m and
the kernel has width k, the output will be of width m − k + 1. The rate of this
shrinkage can be dramatic if the kernels used are large. Since the shrinkage is
greater than 0, it limits the number of convolutional layers that can be included
in the network. As layers are added, the spatial dimension of the network will
eventually drop to 1 × 1, at which point additional layers cannot meaningfully
be considered convolutional. Another special case of the zero-padding setting is
when just enough zero-padding is added to keep the size of the output equal to
the size of the input. MATLAB calls this same convolution. In this case, the
network can contain as many convolutional layers as the available hardware can
support, since the operation of convolution does not modify the architectural
possibilities available to the next layer. However, the input pixels near the border
influence fewer output pixels than the input pixels near the center. This can make
the border pixels somewhat underrepresented in the model. This motivates the
other extreme case, which MATLAB refers to as full convolution, in which enough
zeroes are added for every pixel to be visited k times in each direction, resulting
in an output image of width m + k − 1. In this case, the output pixels near the
border are a function of fewer pixels than the output pixels near the center. This
can make it difficult to learn a single kernel that performs well at all positions in
the convolutional feature map. Usually the optimal amount of zero padding (in
terms of test set classification accuracy) lies somewhere between “valid” and “same”
convolution.
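The three output widths can be illustrated with NumPy's 1-D convolution modes, which happen to use the same "valid"/"same"/"full" vocabulary; the specific widths below assume the sixteen-pixel input and width-six kernel of figure 9.13.

```python
import numpy as np

m, k = 16, 6                 # input width and kernel width
x = np.ones(m)
w = np.ones(k)

print(len(np.convolve(x, w, mode="valid")))  # m - k + 1 = 11: output shrinks every layer
print(len(np.convolve(x, w, mode="same")))   # m = 16: width preserved by zero padding
print(len(np.convolve(x, w, mode="full")))   # m + k - 1 = 21: every pixel visited k times
```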
Figure 9.12: Convolution with a stride. In this example, we use a stride of two.
(Top)Convolution with a stride length of two implemented in a single operation.
(Bottom)Convolution with a stride greater than one pixel is mathematically equivalent to
convolution with unit stride followed by downsampling. Obviously, the two-step approach
involving downsampling is computationally wasteful, because it computes many values
that are then discarded.
Figure 9.13: The effect of zero padding on network size: Consider a convolutional network
with a kernel of width six at every layer. In this example, we do not use any pooling, so
only the convolution operation itself shrinks the network size. (Top)In this convolutional
network, we do not use any implicit zero padding. This causes the representation to
shrink by five pixels at each layer. Starting from an input of sixteen pixels, we are only
able to have three convolutional layers, and the last layer does not ever move the kernel,
so arguably only two of the layers are truly convolutional. The rate of shrinking can
be mitigated by using smaller kernels, but smaller kernels are less expressive and some
shrinking is inevitable in this kind of architecture. (Bottom)By adding five implicit zeroes
to each layer, we prevent the representation from shrinking with depth. This allows us to
make an arbitrarily deep convolutional network.
In some cases, we do not actually want to use convolution, but rather locally
connected layers (LeCun, 1986, 1989). In this case, the adjacency matrix in the
graph of our MLP is the same, but every connection has its own weight, specified
by a 6-D tensor W. The indices into W are respectively: i, the output channel,
j, the output row, k, the output column, l, the input channel, m, the row offset
within the input, and n, the column offset within the input. The linear part of a
locally connected layer is then given by
$$Z_{i,j,k} = \sum_{l,m,n} \left[ V_{l,\, j+m-1,\, k+n-1}\, w_{i,j,k,l,m,n} \right]. \tag{9.9}$$
This is sometimes also called unshared convolution, because it is a similar oper-
ation to discrete convolution with a small kernel, but without sharing parameters
across locations. Figure 9.14 compares local connections, convolution, and full connections.
Locally connected layers are useful when we know that each feature should be
a function of a small part of space, but there is no reason to think that the same
feature should occur across all of space. For example, if we want to tell if an image
is a picture of a face, we only need to look for the mouth in the bottom half of the
image.
It can also be useful to make versions of convolution or locally connected layers
in which the connectivity is further restricted, for example to constrain each output
channel i to be a function of only a subset of the input channels l. A common
way to do this is to make the first m output channels connect to only the first
n input channels, the second m output channels connect to only the second n
input channels, and so on. See figure 9.15 for an example. Modeling interactions
between few channels allows the network to have fewer parameters in order to
reduce memory consumption and increase statistical efficiency, and also reduces
the amount of computation needed to perform forward and back-propagation. It
accomplishes these goals without reducing the number of hidden units.
Tiled convolution (Gregor and LeCun, 2010a; Le et al., 2010) offers a compromise
between a convolutional layer and a locally connected layer. Rather than
learning a separate set of weights at every spatial location, we learn a set of kernels
that we rotate through as we move through space. This means that immediately
neighboring locations will have different filters, like in a locally connected layer,
but the memory requirements for storing the parameters will increase only by a
factor of the size of this set of kernels, rather than the size of the entire output
feature map. See figure 9.16 for a comparison of locally connected layers, tiled
convolution, and standard convolution.
Figure 9.14: Comparison of local connections, convolution, and full connections.
(Top)A locally connected layer with a patch size of two pixels. Each edge is labeled with
a unique letter to show that each edge is associated with its own weight parameter.
(Center)A convolutional layer with a kernel width of two pixels. This model has exactly
the same connectivity as the locally connected layer. The difference lies not in which units
interact with each other, but in how the parameters are shared. The locally connected layer
has no parameter sharing. The convolutional layer uses the same two weights repeatedly
across the entire input, as indicated by the repetition of the letters labeling each edge.
(Bottom)A fully connected layer resembles a locally connected layer in the sense that each
edge has its own parameter (there are too many to label explicitly with letters in this
diagram). However, it does not have the restricted connectivity of the locally connected
layer.
Figure 9.15: A convolutional network with the first two output channels connected to
only the first two input channels, and the second two output channels connected to only
the second two input channels.
Figure 9.16: A comparison of locally connected layers, tiled convolution, and standard
convolution. All three have the same sets of connections between units, when the same
size of kernel is used. This diagram illustrates the use of a kernel that is two pixels wide.
The differences between the methods lies in how they share parameters. (Top)A locally
connected layer has no sharing at all. We indicate that each connection has its own weight
by labeling each connection with a unique letter. (Center)Tiled convolution has a set of
t different kernels. Here we illustrate the case of t = 2. One of these kernels has edges
labeled “a” and “b,” while the other has edges labeled “c” and “d.” Each time we move one
pixel to the right in the output, we move on to using a different kernel. This means that,
like the locally connected layer, neighboring units in the output have different parameters.
Unlike the locally connected layer, after we have gone through all t available kernels,
we cycle back to the first kernel. If two output units are separated by a multiple of t
steps, then they share parameters. (Bottom)Traditional convolution is equivalent to tiled
convolution with t = 1. There is only one kernel and it is applied everywhere, as indicated
in the diagram by using the kernel with weights labeled “a” and “b” everywhere.
To define tiled convolution algebraically, let K be a 6-D tensor, where two of
the dimensions correspond to different locations in the output map. Rather than
having a separate index for each location in the output map, output locations cycle
through a set of t different choices of kernel stack in each direction. If t is equal to
the output width, this is the same as a locally connected layer.
$$Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1}\, K_{i,l,m,n,\, j\%t+1,\, k\%t+1}, \tag{9.10}$$
where % is the modulo operation, with t%t = 0, (t + 1)%t = 1, etc. It is
straightforward to generalize this equation to use a different tiling range for each
dimension.
Both locally connected layers and tiled convolutional layers have an interesting
interaction with max-pooling: the detector units of these layers are driven by
different filters. If these filters learn to detect different transformed versions of
the same underlying features, then the max-pooled units become invariant to the
learned transformation (see figure 9.9). Convolutional layers are hard-coded to be
invariant specifically to translation.
Other operations besides convolution are usually necessary to implement a
convolutional network. To perform learning, one must be able to compute the
gradient with respect to the kernel, given the gradient with respect to the outputs.
In some simple cases, this operation can be performed using the convolution
operation, but many cases of interest, including the case of stride greater than 1,
do not have this property.
Recall that convolution is a linear operation and can thus be described as a
matrix multiplication (if we first reshape the input tensor into a flat vector). The
matrix involved is a function of the convolution kernel. The matrix is sparse and
each element of the kernel is copied to several elements of the matrix. This view
helps us to derive some of the other operations needed to implement a convolutional
network.
Multiplication by the transpose of the matrix defined by convolution is one
such operation. This is the operation needed to back-propagate error derivatives
through a convolutional layer, so it is needed to train convolutional networks
that have more than one hidden layer. This same operation is also needed if we
wish to reconstruct the visible units from the hidden units (Simard et al., 1992).
Reconstructing the visible units is an operation commonly used in the models
described in part III of this book, such as autoencoders, RBMs, and sparse coding.
Transpose convolution is necessary to construct convolutional versions of those
models. Like the kernel gradient operation, this input gradient operation can be
implemented using a convolution in some cases, but in the general case requires
a third operation to be implemented. Care must be taken to coordinate this
transpose operation with the forward propagation. The size of the output that the
transpose operation should return depends on the zero padding policy and stride of
the forward propagation operation, as well as the size of the forward propagation’s
output map. In some cases, multiple sizes of input to forward propagation can
result in the same size of output map, so the transpose operation must be explicitly
told what the size of the original input was.
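In one dimension this transpose operation is easy to sketch: build the (Toeplitz) matrix of the forward convolution, multiply the output gradient by its transpose, and observe that the result is itself a zero-padded convolution. The helpers below are illustrative assumptions, not any library's API; the forward operation is cross-correlation (convolution without kernel flipping), as is conventional in neural network libraries.

```python
import numpy as np

def conv_matrix(k, input_len):
    """Matrix form of "valid" 1-D cross-correlation: W @ x == correlate(x, k, 'valid')."""
    out_len = input_len - len(k) + 1
    W = np.zeros((out_len, input_len))
    for i in range(out_len):
        W[i, i:i + len(k)] = k
    return W

k = np.array([1.0, 2.0, -1.0])
x = np.arange(8, dtype=float)
W = conv_matrix(k, len(x))

# Gradient of some loss with respect to the layer's output (received during backprop).
g_out = np.arange(len(x) - len(k) + 1, dtype=float)

# Multiplying by W.T back-propagates the gradient to the layer's input ...
grad_input = W.T @ g_out
# ... which is the same as a zero-padded ("full") convolution of g_out with the kernel
# (np.convolve flips internally, so this equals cross-correlation with the flipped kernel).
print(np.allclose(grad_input, np.convolve(g_out, k, mode="full")))
```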
These three operations—convolution, backprop from output to weights, and
backprop from output to inputs—are sufficient to compute all of the gradients
needed to train any depth of feedforward convolutional network, as well as to train
convolutional networks with reconstruction functions based on the transpose of
convolution. See Goodfellow (2010) for a full derivation of the equations in the
fully general multi-dimensional, multi-example case. To give a sense of how these
equations work, we present the two dimensional, single example version here.
Suppose we want to train a convolutional network that incorporates strided
convolution of kernel stack K applied to multi-channel image V with stride s as
defined by c(K, V, s) as in equation 9.8. Suppose we want to minimize some loss
function J(V, K). During forward propagation, we will need to use c itself to
output Z, which is then propagated through the rest of the network and used to
compute the cost function J. During back-propagation, we will receive a tensor G
such that
$$G_{i,j,k} = \frac{\partial}{\partial Z_{i,j,k}} J(V, K).$$
To train the network, we need to compute the derivatives with respect to the
weights in the kernel. To do so, we can use a function
$$g(G, V, s)_{i,j,k,l} = \frac{\partial}{\partial K_{i,j,k,l}} J(V, K) = \sum_{m,n} G_{i,m,n}\, V_{j,\, (m-1)\times s + k,\, (n-1)\times s + l}. \tag{9.11}$$
If this layer is not the bottom layer of the network, we will need to compute
the gradient with respect to V in order to back-propagate the error farther down.
To do so, we can use a function
    h(K, G, s)_{i,j,k} = \frac{\partial}{\partial V_{i,j,k}} J(V, K)                          (9.12)
                       = \sum_{\substack{l,m \\ \text{s.t.} \\ (l-1)\times s + m = j}} \;
                         \sum_{\substack{n,p \\ \text{s.t.} \\ (n-1)\times s + p = k}} \;
                         \sum_{q} K_{q,i,m,p} G_{q,l,n}.                                      (9.13)
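As a concrete illustration, here is a rough, zero-indexed numpy sketch of the strided convolution c and the kernel-gradient function g of equation 9.11; the tensor shapes, loop structure, and finite-difference check are assumptions made for illustration, not the book's implementation.

```python
import numpy as np

def c(K, V, s):
    # Strided "valid" convolution. K: (out_ch, in_ch, kh, kw); V: (in_ch, H, W).
    out_ch, in_ch, kh, kw = K.shape
    _, H, W = V.shape
    out_h, out_w = (H - kh) // s + 1, (W - kw) // s + 1
    Z = np.zeros((out_ch, out_h, out_w))
    for i in range(out_ch):
        for j in range(out_h):
            for k in range(out_w):
                Z[i, j, k] = np.sum(K[i] * V[:, j * s:j * s + kh, k * s:k * s + kw])
    return Z

def g(G, V, s, kernel_shape):
    # Gradient of the loss with respect to K, given G = dJ/dZ (cf. equation 9.11).
    out_ch, in_ch, kh, kw = kernel_shape
    grad_K = np.zeros(kernel_shape)
    for i in range(out_ch):
        for m in range(G.shape[1]):
            for n in range(G.shape[2]):
                grad_K[i] += G[i, m, n] * V[:, m * s:m * s + kh, n * s:n * s + kw]
    return grad_K

# Finite-difference check on a tiny example.
rng = np.random.default_rng(0)
V = rng.standard_normal((2, 6, 6)); K = rng.standard_normal((3, 2, 3, 3)); s = 2
G = rng.standard_normal(c(K, V, s).shape)           # stand-in for the upstream gradient
J = lambda K_: np.sum(c(K_, V, s) * G)              # a loss whose dJ/dZ is exactly G
eps, idx = 1e-6, (1, 0, 2, 1)
K_pert = K.copy(); K_pert[idx] += eps
assert np.isclose(g(G, V, s, K.shape)[idx], (J(K_pert) - J(K)) / eps, atol=1e-4)
```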
Autoencoder networks, described in chapter 14, are feedforward networks
trained to copy their input to their output. A simple example is the PCA algorithm,
that copies its input x to an approximate reconstruction r using the function
W⊤Wx. It is common for more general autoencoders to use multiplication
by the transpose of the weight matrix just as PCA does. To make such models
convolutional, we can use the function h to perform the transpose of the convolution
operation. Suppose we have hidden units H in the same format as Z and we define
a reconstruction

    R = h(K, H, s).                                                        (9.14)
In order to train the autoencoder, we will receive the gradient with respect
to R as a tensor E. To train the decoder, we need to obtain the gradient with
respect to K. This is given by g(H, E, s). To train the encoder, we need to obtain
the gradient with respect to H. This is given by c(K, E, s). It is also possible to
differentiate through g using c and h, but these operations are not needed for the
back-propagation algorithm on any standard network architectures.
Generally, we do not use only a linear operation in order to transform from
the inputs to the outputs in a convolutional layer. We generally also add some
bias term to each output before applying the nonlinearity. This raises the question
of how to share parameters among the biases. For locally connected layers it is
natural to give each unit its own bias, and for tiled convolution, it is natural to
share the biases with the same tiling pattern as the kernels. For convolutional
layers, it is typical to have one bias per channel of the output and share it across
all locations within each convolution map. However, if the input is of known, fixed
size, it is also possible to learn a separate bias at each location of the output map.
Separating the biases may slightly reduce the statistical efficiency of the model, but
also allows the model to correct for differences in the image statistics at different
locations. For example, when using implicit zero padding, detector units at the
edge of the image receive less total input and may need larger biases.
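A small numpy sketch of the two bias-sharing choices just described; the shapes are arbitrary illustrations.

```python
import numpy as np

channels, out_h, out_w = 16, 8, 8
Z = np.random.randn(channels, out_h, out_w)                   # pre-activations from a convolution

bias_per_channel = np.random.randn(channels, 1, 1)            # typical: one bias per output channel
bias_per_location = np.random.randn(channels, out_h, out_w)   # only possible for fixed-size inputs

shared = np.tanh(Z + bias_per_channel)      # broadcasting shares each bias across all locations
separate = np.tanh(Z + bias_per_location)   # each output location gets its own bias
```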
9.6 Structured Outputs
Convolutional networks can be used to output a high-dimensional, structured
object, rather than just predicting a class label for a classification task or a real
value for a regression task. Typically this object is just a tensor, emitted by a
standard convolutional layer. For example, the model might emit a tensor S, where
Si,j,k is the probability that pixel (j, k) of the input to the network belongs to class
i. This allows the model to label every pixel in an image and draw precise masks
that follow the outlines of individual objects.
One issue that often comes up is that the output plane can be smaller than the
input plane, as shown in figure 9.13. In the kinds of architectures typically used for
classification of a single object in an image, the greatest reduction in the spatial
dimensions of the network comes from using pooling layers with large stride. In
order to produce an output map of similar size as the input, one can avoid pooling
altogether (Jain et al., 2007). Another strategy is to simply emit a lower-resolution
grid of labels (Pinheiro and Collobert, 2014, 2015). Finally, in principle, one could
use a pooling operator with unit stride.

One strategy for pixel-wise labeling of images is to produce an initial guess
of the image labels, then refine this initial guess using the interactions between
neighboring pixels. Repeating this refinement step several times corresponds to
using the same convolutions at each stage, sharing weights between the last layers of
the deep net (Jain et al., 2007). This makes the sequence of computations performed
by the successive convolutional layers with weights shared across layers a particular
kind of recurrent network (Pinheiro and Collobert, 2014, 2015). Figure 9.17 shows
the architecture of such a recurrent convolutional network.

Figure 9.17: An example of a recurrent convolutional network for pixel labeling. The
input is an image tensor X, with axes corresponding to image rows, image columns, and
channels (red, green, blue). The goal is to output a tensor of labels Ŷ, with a probability
distribution over labels for each pixel. This tensor has axes corresponding to image rows,
image columns, and the different classes. Rather than outputting Ŷ in a single shot, the
recurrent network iteratively refines its estimate Ŷ by using a previous estimate of Ŷ
as input for creating a new estimate. The same parameters are used for each updated
estimate, and the estimate can be refined as many times as we wish. The tensor of
convolution kernels U is used on each step to compute the hidden representation given the
input image. The kernel tensor V is used to produce an estimate of the labels given the
hidden values. On all but the first step, the kernels W are convolved over Ŷ to provide
input to the hidden layer. On the first time step, this term is replaced by zero. Because
the same parameters are used on each step, this is an example of a recurrent network, as
described in chapter 10.
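Below is a schematic numpy/scipy sketch of the refinement loop of figure 9.17; the kernel shapes, number of refinement steps, "same" padding, and random parameters are all illustrative assumptions rather than details specified in the text.

```python
import numpy as np
from scipy.signal import correlate

def multi_channel_conv(kernels, x):
    # kernels: (out_ch, in_ch, kh, kw); x: (in_ch, H, W); stride 1, "same" padding.
    return np.stack([
        sum(correlate(x[c], k[c], mode="same") for c in range(x.shape[0]))
        for k in kernels
    ])

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
in_ch, hid_ch, n_classes, H, W = 3, 8, 5, 16, 16
X = rng.standard_normal((in_ch, H, W))                       # input image
U = rng.standard_normal((hid_ch, in_ch, 3, 3)) * 0.1         # image -> hidden
V = rng.standard_normal((n_classes, hid_ch, 3, 3)) * 0.1     # hidden -> label estimate
Wk = rng.standard_normal((hid_ch, n_classes, 3, 3)) * 0.1    # previous estimate -> hidden

Y_hat = np.zeros((n_classes, H, W))          # first step: the label-feedback term is zero
for step in range(3):                        # the estimate can be refined as many times as we wish
    hidden = np.tanh(multi_channel_conv(U, X) + multi_channel_conv(Wk, Y_hat))
    Y_hat = softmax(multi_channel_conv(V, hidden), axis=0)   # per-pixel class probabilities
```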
Once a prediction for each pixel is made, various methods can be used to
further process these predictions in order to obtain a segmentation of the image
into regions (Briggman et al., 2009; Turaga et al., 2010; Farabet et al., 2013).
The general idea is to assume that large groups of contiguous pixels tend to be
associated with the same label. Graphical models can describe the probabilistic
relationships between neighboring pixels. Alternatively, the convolutional network
can be trained to maximize an approximation of the graphical model training
objective (Ning et al., 2005; Thompson et al., 2014).
9.7 Data Types
The data used with a convolutional network usually consists of several channels,
each channel being the observation of a different quantity at some point in space
or time. See table 9.1 for examples of data types with different dimensionalities
and number of channels.

For an example of convolutional networks applied to video, see Chen et al.
(2010).
So far we have discussed only the case where every example in the train and test
data has the same spatial dimensions. One advantage to convolutional networks
is that they can also process inputs with varying spatial extents. These kinds of
input simply cannot be represented by traditional, matrix multiplication-based
neural networks. This provides a compelling reason to use convolutional networks
even when computational cost and overfitting are not significant issues.
For example, consider a collection of images, where each image has a different
width and height. It is unclear how to model such inputs with a weight matrix of
fixed size. Convolution is straightforward to apply; the kernel is simply applied a
different number of times depending on the size of the input, and the output of the
convolution operation scales accordingly. Convolution may be viewed as matrix
multiplication; the same convolution kernel induces a different size of doubly block
circulant matrix for each size of input. Sometimes the output of the network is
allowed to have variable size as well as the input, for example if we want to assign
a class label to each pixel of the input. In this case, no further design work is
necessary. In other cases, the network must produce some fixed-size output, for
example if we want to assign a single class label to the entire image. In this case
we must make some additional design steps, like inserting a pooling layer whose
pooling regions scale in size proportional to the size of the input, in order to
maintain a fixed number of pooled outputs. Some examples of this kind of strategy
are shown in figure 9.11.
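As a small illustration, the sketch below (sizes and the global-average pooling choice are assumptions) applies the same kernel to inputs of two different sizes; the feature map size tracks the input size, while pooling over a region proportional to the input yields a fixed-size summary.

```python
import numpy as np
from scipy.signal import correlate

kernel = np.random.randn(3, 3)

for H, W in [(16, 16), (28, 40)]:                 # two images with different spatial extents
    image = np.random.randn(H, W)
    feature_map = correlate(image, kernel, mode="valid")   # kernel applied more or fewer times
    pooled = feature_map.mean()                   # pooling region scales with the input size
    print(feature_map.shape, float(pooled))       # (14, 14) ... then (26, 38) ...
```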
1-D, single channel. Audio waveform: The axis we convolve over corresponds to time.
We discretize time and measure the amplitude of the waveform once per time step.

1-D, multi-channel. Skeleton animation data: Animations of 3-D computer-rendered
characters are generated by altering the pose of a "skeleton" over time. At each point
in time, the pose of the character is described by a specification of the angles of each
of the joints in the character's skeleton. Each channel in the data we feed to the
convolutional model represents the angle about one axis of one joint.

2-D, single channel. Audio data that has been preprocessed with a Fourier transform:
We can transform the audio waveform into a 2-D tensor with different rows corresponding
to different frequencies and different columns corresponding to different points in time.
Using convolution in time makes the model equivariant to shifts in time. Using
convolution across the frequency axis makes the model equivariant to frequency, so that
the same melody played in a different octave produces the same representation but at a
different height in the network's output.

2-D, multi-channel. Color image data: One channel contains the red pixels, one the
green pixels, and one the blue pixels. The convolution kernel moves over both the
horizontal and vertical axes of the image, conferring translation equivariance in both
directions.

3-D, single channel. Volumetric data: A common source of this kind of data is medical
imaging technology, such as CT scans.

3-D, multi-channel. Color video data: One axis corresponds to time, one to the height
of the video frame, and one to the width of the video frame.

Table 9.1: Examples of different formats of data that can be used with convolutional
networks.
Note that the use of convolution for processing variable sized inputs only makes
sense for inputs that have variable size because they contain varying amounts
of observation of the same kind of thing—different lengths of recordings over
time, different widths of observations over space, etc. Convolution does not make
sense if the input has variable size because it can optionally include different
kinds of observations. For example, if we are processing college applications, and
our features consist of both grades and standardized test scores, but not every
applicant took the standardized test, then it does not make sense to convolve the
same weights over both the features corresponding to the grades and the features
corresponding to the test scores.
9.8 Efficient Convolution Algorithms
Modern convolutional network applications often involve networks containing more
than one million units. Powerful implementations exploiting parallel computation
resources, as discussed in section 12.1, are essential. However, in many cases it
is also possible to speed up convolution by selecting an appropriate convolution
algorithm.
Convolution is equivalent to converting both the input and the kernel to the
frequency domain using a Fourier transform, performing point-wise multiplication
of the two signals, and converting back to the time domain using an inverse
Fourier transform. For some problem sizes, this can be faster than the naive
implementation of discrete convolution.
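A quick numpy sketch of this equivalence (lengths are arbitrary): zero-pad both signals to the full output length, multiply their Fourier transforms pointwise, and transform back.

```python
import numpy as np

x = np.random.randn(128)
k = np.random.randn(16)
n = len(x) + len(k) - 1                 # length of the full linear convolution

direct = np.convolve(x, k, mode="full")
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

assert np.allclose(direct, via_fft)
```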
When a d-dimensional kernel can be expressed as the outer product of d
vectors, one vector per dimension, the kernel is called separable. When the
kernel is separable, naive convolution is inefficient. It is equivalent to composing d
one-dimensional convolutions with each of these vectors. The composed approach
is significantly faster than performing one d-dimensional convolution with their
outer product. The kernel also takes fewer parameters to represent as vectors.
If the kernel is w elements wide in each dimension, then naive multidimensional
convolution requires O(w^d) runtime and parameter storage space, while separable
convolution requires O(w × d) runtime and parameter storage space. Of course,
not every convolution can be represented in this way.
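The sketch below (kernel and image sizes are arbitrary) checks this for d = 2: convolving with the two 1-D vectors in sequence matches one 2-D convolution with their outer product.

```python
import numpy as np
from scipy.signal import convolve2d

col = np.random.randn(5)
row = np.random.randn(5)
kernel_2d = np.outer(col, row)                 # separable 5x5 kernel (w = 5, d = 2)

image = np.random.randn(64, 64)

full = convolve2d(image, kernel_2d, mode="valid")
composed = convolve2d(convolve2d(image, col[:, None], mode="valid"),
                      row[None, :], mode="valid")

assert np.allclose(full, composed)   # same result, O(w*d) work per output instead of O(w**d)
```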
Devising faster ways of performing convolution or approximate convolution
without harming the accuracy of the model is an active area of research. Even tech-
niques that improve the efficiency of only forward propagation are useful because
in the commercial setting, it is typical to devote more resources to deployment of
a network than to its training.
9.9 Random or Unsupervised Features
Typically, the most expensive part of convolutional network training is learning the
features. The output layer is usually relatively inexpensive due to the small number
of features provided as input to this layer after passing through several layers of
pooling. When performing supervised training with gradient descent, every gradient
step requires a complete run of forward propagation and backward propagation
through the entire network. One way to reduce the cost of convolutional network
training is to use features that are not trained in a supervised fashion.
There are three basic strategies for obtaining convolution kernels without
supervised training. One is to simply initialize them randomly. Another is to
design them by hand, for example by setting each kernel to detect edges at a
certain orientation or scale. Finally, one can learn the kernels with an unsupervised
criterion. For example, Coates et al. (2011) apply k-means clustering to small
image patches, then use each learned centroid as a convolution kernel. Part III
describes many more unsupervised learning approaches. Learning the features
with an unsupervised criterion allows them to be determined separately from the
classifier layer at the top of the architecture. One can then extract the features for
the entire training set just once, essentially constructing a new training set for the
last layer. Learning the last layer is then typically a convex optimization problem,
assuming the last layer is something like logistic regression or an SVM.
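A rough sketch of this patch-based strategy; the random stand-in "images", patch size, number of centroids, and the use of scikit-learn's KMeans (assumed available) are all illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans    # assumed available; any k-means implementation would do

rng = np.random.default_rng(0)
images = rng.standard_normal((100, 32, 32))    # stand-in for a set of grayscale training images

patch = 6
patches = []
for img in images:
    for _ in range(20):                                        # sample 20 random patches per image
        i, j = rng.integers(0, 32 - patch, size=2)
        p = img[i:i + patch, j:j + patch].ravel()
        patches.append((p - p.mean()) / (p.std() + 1e-8))      # simple per-patch normalization
patches = np.array(patches)

kmeans = KMeans(n_clusters=64, n_init=10).fit(patches)
kernels = kmeans.cluster_centers_.reshape(64, patch, patch)    # 64 kernels learned without labels
```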
Random filters often work surprisingly well in convolutional networks (Jarrett
et al., 2009; Saxe et al., 2011; Pinto et al., 2011; Cox and Pinto, 2011). Saxe et al.
(2011) showed that layers consisting of convolution followed by pooling naturally
become frequency selective and translation invariant when assigned random weights.
They argue that this provides an inexpensive way to choose the architecture of
a convolutional network: first evaluate the performance of several convolutional
network architectures by training only the last layer, then take the best of these
architectures and train the entire architecture using a more expensive approach.
An intermediate approach is to learn the features, but using methods that do
not require full forward and back-propagation at every gradient step. As with
multilayer perceptrons, we use greedy layer-wise pretraining, to train the first layer
in isolation, then extract all features from the first layer only once, then train the
second layer in isolation given those features, and so on. Chapter 8 has described
how to perform supervised greedy layer-wise pretraining, and part III extends this
to greedy layer-wise pretraining using an unsupervised criterion at each layer. The
canonical example of greedy layer-wise pretraining of a convolutional model is the
convolutional deep belief network (Lee et al., 2009). Convolutional networks offer
us the opportunity to take the pretraining strategy one step further than is possible
with multilayer perceptrons. Instead of training an entire convolutional layer at a
time, we can train a model of a small patch, as Coates et al. (2011) do with k-means.
We can then use the parameters from this patch-based model to define the kernels
of a convolutional layer. This means that it is possible to use unsupervised learning
to train a convolutional network without ever using convolution during the training
process. Using this approach, we can train very large models and incur a high
computational cost only at inference time (Ranzato et al., 2007b; Jarrett et al.,
2009; Kavukcuoglu et al., 2010; Coates et al., 2013). This approach was popular
from roughly 2007–2013, when labeled datasets were small and computational
power was more limited. Today, most convolutional networks are trained in a
purely supervised fashion, using full forward and back-propagation through the
entire network on each training iteration.
As with other approaches to unsupervised pretraining, it remains difficult to
tease apart the cause of some of the benefits seen with this approach. Unsupervised
pretraining may offer some regularization relative to supervised training, or it may
simply allow us to train much larger architectures due to the reduced computational
cost of the learning rule.
9.10 The Neuroscientific Basis for Convolutional Net-
works
Convolutional networks are perhaps the greatest success story of biologically
inspired artificial intelligence. Though convolutional networks have been guided
by many other fields, some of the key design principles of neural networks were
drawn from neuroscience.
The history of convolutional networks begins with neuroscientific experiments
long before the relevant computational models were developed. Neurophysiologists
David Hubel and Torsten Wiesel collaborated for several years to determine many
of the most basic facts about how the mammalian vision system works (Hubel and
Wiesel, 1959, 1962, 1968). Their accomplishments were eventually recognized with
a Nobel prize. Their findings that have had the greatest influence on contemporary
deep learning models were based on recording the activity of individual neurons in
cats. They observed how neurons in the cat’s brain responded to images projected
in precise locations on a screen in front of the cat. Their great discovery was
that neurons in the early visual system responded most strongly to very specific
patterns of light, such as precisely oriented bars, but responded hardly at all to
other patterns.
Their work helped to characterize many aspects of brain function that are
beyond the scope of this book. From the point of view of deep learning, we can
focus on a simplified, cartoon view of brain function.
In this simplified view, we focus on a part of the brain called V1, also known
as the primary visual cortex. V1 is the first area of the brain that begins to
perform significantly advanced processing of visual input. In this cartoon view,
images are formed by light arriving in the eye and stimulating the retina, the
light-sensitive tissue in the back of the eye. The neurons in the retina perform
some simple preprocessing of the image but do not substantially alter the way it is
represented. The image then passes through the optic nerve and a brain region
called the lateral geniculate nucleus. The main role, as far as we are concerned
here, of both of these anatomical regions is primarily just to carry the signal from
the eye to V1, which is located at the back of the head.
A convolutional network layer is designed to capture three properties of V1:
1. V1 is arranged in a spatial map. It actually has a two-dimensional structure
mirroring the structure of the image in the retina. For example, light
arriving at the lower half of the retina affects only the corresponding half of
V1. Convolutional networks capture this property by having their features
defined in terms of two dimensional maps.
2. V1 contains many simple cells. A simple cell’s activity can to some extent
be characterized by a linear function of the image in a small, spatially
localized receptive field. The detector units of a convolutional network are
designed to emulate these properties of simple cells.
3. V1 also contains many complex cells. These cells respond to features that
are similar to those detected by simple cells, but complex cells are invariant
to small shifts in the position of the feature. This inspires the pooling units
of convolutional networks. Complex cells are also invariant to some changes
in lighting that cannot be captured simply by pooling over spatial locations.
These invariances have inspired some of the cross-channel pooling strategies
in convolutional networks, such as maxout units (Goodfellow et al., 2013a).
Though we know the most about V1, it is generally believed that the same
basic principles apply to other areas of the visual system. In our cartoon view of
the visual system, the basic strategy of detection followed by pooling is repeatedly
applied as we move deeper into the brain. As we pass through multiple anatomical
layers of the brain, we eventually find cells that respond to some specific concept
and are invariant to many transformations of the input. These cells have been
nicknamed “grandmother cells”—the idea is that a person could have a neuron that
activates when seeing an image of their grandmother, regardless of whether she
appears in the left or right side of the image, whether the image is a close-up of
her face or a zoomed-out shot of her entire body, whether she is brightly lit, or in
shadow, etc.
These grandmother cells have been shown to actually exist in the human brain,
in a region called the medial temporal lobe (Quiroga et al., 2005). Researchers
tested whether individual neurons would respond to photos of famous individuals.
They found what has come to be called the “Halle Berry neuron”: an individual
neuron that is activated by the concept of Halle Berry. This neuron fires when a
person sees a photo of Halle Berry, a drawing of Halle Berry, or even text containing
the words “Halle Berry.” Of course, this has nothing to do with Halle Berry herself;
other neurons responded to the presence of Bill Clinton, Jennifer Aniston, etc.
These medial temporal lobe neurons are somewhat more general than modern
convolutional networks, which would not automatically generalize to identifying
a person or object when reading its name. The closest analog to a convolutional
network’s last layer of features is a brain area called the inferotemporal cortex
(IT). When viewing an object, information flows from the retina, through the
LGN, to V1, then onward to V2, then V4, then IT. This happens within the first
100ms of glimpsing an object. If a person is allowed to continue looking at the
object for more time, then information will begin to flow backwards as the brain
uses top-down feedback to update the activations in the lower level brain areas.
However, if we interrupt the person’s gaze, and observe only the firing rates that
result from the first 100ms of mostly feedforward activation, then IT proves to be
very similar to a convolutional network. Convolutional networks can predict IT
firing rates, and also perform very similarly to (time limited) humans on object
recognition tasks (DiCarlo, 2013).
That being said, there are many differences between convolutional networks
and the mammalian vision system. Some of these differences are well known
to computational neuroscientists, but outside the scope of this book. Some of
these differences are not yet known, because many basic questions about how the
mammalian vision system works remain unanswered. As a brief list:
• The human eye is mostly very low resolution, except for a tiny patch called the
fovea. The fovea only observes an area about the size of a thumbnail held at
arm's length. Though we feel as if we can see an entire scene in high resolution,
this is an illusion created by the subconscious part of our brain, as it stitches
together several glimpses of small areas. Most convolutional networks actually
receive large full resolution photographs as input. The human brain makes
several eye movements called saccades to glimpse the most visually salient
or task-relevant parts of a scene. Incorporating similar attention mechanisms
into deep learning models is an active research direction. In the context of
deep learning, attention mechanisms have been most successful for natural
language processing, as described in section 12.4.5.1. Several visual models
with foveation mechanisms have been developed but so far have not become
the dominant approach (Larochelle and Hinton, 2010; Denil et al., 2012).
• The human visual system is integrated with many other senses, such as
hearing, and factors like our moods and thoughts. Convolutional networks
so far are purely visual.
• The human visual system does much more than just recognize objects. It is
able to understand entire scenes including many objects and relationships
between objects, and processes rich 3-D geometric information needed for
our bodies to interface with the world. Convolutional networks have been
applied to some of these problems but these applications are in their infancy.
• Even simple brain areas like V1 are heavily impacted by feedback from higher
levels. Feedback has been explored extensively in neural network models but
has not yet been shown to offer a compelling improvement.
• While feedforward IT firing rates capture much of the same information as
convolutional network features, it is not clear how similar the intermediate
computations are. The brain probably uses very different activation and
pooling functions. An individual neuron’s activation probably is not well-
characterized by a single linear filter response. A recent model of V1 involves
multiple quadratic filters for each neuron (Rust et al., 2005). Indeed our
cartoon picture of “simple cells” and “complex cells” might create a non-
existent distinction; simple cells and complex cells might both be the same
kind of cell but with their “parameters” enabling a continuum of behaviors
ranging from what we call “simple” to what we call “complex.”
It is also worth mentioning that neuroscience has told us relatively little
about how to train convolutional networks. Model structures with parameter
sharing across multiple spatial locations date back to early connectionist models
of vision (Marr and Poggio, 1976), but these models did not use the modern
back-propagation algorithm and gradient descent. For example, the Neocognitron
(Fukushima, 1980) incorporated most of the model architecture design elements of
the modern convolutional network but relied on a layer-wise unsupervised clustering
algorithm.
Lang and Hinton (1988) introduced the use of back-propagation to train
time-delay neural networks (TDNNs). To use contemporary terminology,
TDNNs are one-dimensional convolutional networks applied to time series. Back-
propagation applied to these models was not inspired by any neuroscientific observa-
tion and is considered by some to be biologically implausible. Following the success
of back-propagation-based training of TDNNs, LeCun et al. (1989) developed
the modern convolutional network by applying the same training algorithm to 2-D
convolution applied to images.
So far we have described how simple cells are roughly linear and selective for
certain features, complex cells are more nonlinear and become invariant to some
transformations of these simple cell features, and stacks of layers that alternate
between selectivity and invariance can yield grandmother cells for very specific
phenomena. We have not yet described precisely what these individual cells detect.
In a deep, nonlinear network, it can be difficult to understand the function of
individual cells. Simple cells in the first layer are easier to analyze, because their
responses are driven by a linear function. In an artificial neural network, we can
just display an image of the convolution kernel to see what the corresponding
channel of a convolutional layer responds to. In a biological neural network, we
do not have access to the weights themselves. Instead, we put an electrode in the
neuron itself, display several samples of white noise images in front of the animal’s
retina, and record how each of these samples causes the neuron to activate. We can
then fit a linear model to these responses in order to obtain an approximation of
the neuron's weights. This approach is known as reverse correlation (Ringach
and Shapley, 2004).
Reverse correlation shows us that most V1 cells have weights that are described
by Gabor functions. The Gabor function describes the weight at a 2-D point
in the image. We can think of an image as being a function of 2-D coordinates,
I(x, y). Likewise, we can think of a simple cell as sampling the image at a set of
locations, defined by a set of x coordinates X and a set of y coordinates, Y, and
applying weights that are also a function of the location, w(x, y). From this point
of view, the response of a simple cell to an image is given by
    s(I) = \sum_{x \in \mathbb{X}} \sum_{y \in \mathbb{Y}} w(x, y)\, I(x, y).          (9.15)
Specifically, w(x, y) takes the form of a Gabor function:

    w(x, y; \alpha, \beta_x, \beta_y, f, \phi, x_0, y_0, \tau)
        = \alpha \exp\left(-\beta_x x'^2 - \beta_y y'^2\right) \cos(f x' + \phi),       (9.16)

where

    x' = (x - x_0)\cos(\tau) + (y - y_0)\sin(\tau)                                      (9.17)

and

    y' = -(x - x_0)\sin(\tau) + (y - y_0)\cos(\tau).                                    (9.18)
Here, α, βx, βy, f, φ, x0, y0, and τ are parameters that control the properties
of the Gabor function. Figure 9.18 shows some examples of Gabor functions with
different settings of these parameters.
The parameters x0, y0, and τ define a coordinate system. We translate and
rotate x and y to form x′ and y′. Specifically, the simple cell will respond to image
features centered at the point (x0, y0), and it will respond to changes in brightness
as we move along a line rotated τ radians from the horizontal.

Viewed as a function of x′ and y′, the function w then responds to changes in
brightness as we move along the x′ axis. It has two important factors: one is a
Gaussian function and the other is a cosine function.
The Gaussian factor α exp(−βx x′² − βy y′²) can be seen as a gating term that
ensures the simple cell will only respond to values near where x′ and y′ are both
zero, in other words, near the center of the cell's receptive field. The scaling factor
α adjusts the total magnitude of the simple cell's response, while βx and βy control
how quickly its receptive field falls off.

The cosine factor cos(fx′ + φ) controls how the simple cell responds to changing
brightness along the x′ axis. The parameter f controls the frequency of the cosine
and φ controls its phase offset.
Altogether, this cartoon view of simple cells means that a simple cell responds
to a specific spatial frequency of brightness in a specific direction at a specific
location. Simple cells are most excited when the wave of brightness in the image
has the same phase as the weights. This occurs when the image is bright where the
weights are positive and dark where the weights are negative. Simple cells are most
inhibited when the wave of brightness is fully out of phase with the weights—when
the image is dark where the weights are positive and bright where the weights are
negative.
The cartoon view of a complex cell is that it computes the L² norm of the
2-D vector containing two simple cells' responses: c(I) = √(s0(I)² + s1(I)²). An
important special case occurs when s1 has all of the same parameters as s0 except
for φ, and φ is set such that s1 is one quarter cycle out of phase with s0. In this
case, s0 and s1 form a quadrature pair. A complex cell defined in this way
responds when the Gaussian reweighted image I(x, y) exp(−βx x′² − βy y′²) contains
a high amplitude sinusoidal wave with frequency f in direction τ near (x0, y0),
regardless of the phase offset of this wave. In other words, the complex cell is
invariant to small translations of the image in direction τ, or to negating the image
(replacing black with white and vice versa).

Figure 9.18: Gabor functions with a variety of parameter settings. White indicates
large positive weight, black indicates large negative weight, and the background gray
corresponds to zero weight. (Left) Gabor functions with different values of the parameters
that control the coordinate system: x0, y0, and τ. Each Gabor function in this grid is
assigned a value of x0 and y0 proportional to its position in its grid, and τ is chosen so
that each Gabor filter is sensitive to the direction radiating out from the center of the grid.
For the other two plots, x0, y0, and τ are fixed to zero. (Center) Gabor functions with
different Gaussian scale parameters βx and βy. Gabor functions are arranged in increasing
width (decreasing βx) as we move left to right through the grid, and increasing height
(decreasing βy) as we move top to bottom. For the other two plots, the β values are fixed
to 1.5× the image width. (Right) Gabor functions with different sinusoid parameters f
and φ. As we move top to bottom, f increases, and as we move left to right, φ increases.
For the other two plots, φ is fixed to 0 and f is fixed to 5× the image width.
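The sketch below evaluates equations 9.15 through 9.18 and the quadrature-pair complex cell directly in numpy; the parameter values and grid size are arbitrary illustrations, not values used in figure 9.18.

```python
import numpy as np

def gabor(x, y, alpha, beta_x, beta_y, f, phi, x0, y0, tau):
    xp = (x - x0) * np.cos(tau) + (y - y0) * np.sin(tau)     # equation 9.17
    yp = -(x - x0) * np.sin(tau) + (y - y0) * np.cos(tau)    # equation 9.18
    return alpha * np.exp(-beta_x * xp**2 - beta_y * yp**2) * np.cos(f * xp + phi)  # equation 9.16

coords = np.linspace(-1, 1, 32)
X, Y = np.meshgrid(coords, coords)
params = dict(alpha=1.0, beta_x=4.0, beta_y=4.0, f=10.0, x0=0.0, y0=0.0, tau=np.pi / 4)

w0 = gabor(X, Y, phi=0.0, **params)          # weights of simple cell s0
w1 = gabor(X, Y, phi=np.pi / 2, **params)    # s1: one quarter cycle out of phase (quadrature pair)

image = np.random.randn(32, 32)
s0 = np.sum(w0 * image)                      # equation 9.15: simple cell response
s1 = np.sum(w1 * image)
complex_cell = np.sqrt(s0**2 + s1**2)        # invariant to the phase of the underlying wave
```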
Some of the most striking correspondences between neuroscience and machine
learning come from visually comparing the features learned by machine learning
models with those employed by V1. Olshausen and Field (1996) showed that
a simple unsupervised learning algorithm, sparse coding, learns features with
receptive fields similar to those of simple cells. Since then, we have found that
an extremely wide variety of statistical learning algorithms learn features with
Gabor-like functions when applied to natural images. This includes most deep
learning algorithms, which learn these features in their first layer. Figure 9.19
shows some examples. Because so many different learning algorithms learn edge
detectors, it is difficult to conclude that any specific learning algorithm is the
“right” model of the brain just based on the features that it learns (though it can
certainly be a bad sign if an algorithm does not learn some sort of edge detector
when applied to natural images). These features are an important part of the
statistical structure of natural images and can be recovered by many different
approaches to statistical modeling. See Hyvärinen et al. (2009) for a review of the
field of natural image statistics.
Figure 9.19: Many machine learning algorithms learn features that detect edges or specific
colors of edges when applied to natural images. These feature detectors are reminiscent of
the Gabor functions known to be present in primary visual cortex. (Left) Weights learned
by an unsupervised learning algorithm (spike and slab sparse coding) applied to small
image patches. (Right) Convolution kernels learned by the first layer of a fully supervised
convolutional maxout network. Neighboring pairs of filters drive the same maxout unit.
9.11 Convolutional Networks and the History of Deep
Learning
Convolutional networks have played an important role in the history of deep
learning. They are a key example of a successful application of insights obtained
by studying the brain to machine learning applications. They were also some of
the first deep models to perform well, long before arbitrary deep models were
considered viable. Convolutional networks were also some of the first neural
networks to solve important commercial applications and remain at the forefront
of commercial applications of deep learning today. For example, in the 1990s, the
neural network research group at AT&T developed a convolutional network for
reading checks (LeCun et al., 1998b). By the end of the 1990s, this system deployed
by NEC was reading over 10% of all the checks in the US. Later, several OCR
and handwriting recognition systems based on convolutional nets were deployed by
Microsoft (Simard et al., 2003). See chapter 12 for more details on such applications
and more modern applications of convolutional networks. See LeCun et al. (2010)
for a more in-depth history of convolutional networks up to 2010.
Convolutional networks were also used to win many contests. The current
intensity of commercial interest in deep learning began when Krizhevsky et al.
(2012) won the ImageNet object recognition challenge, but convolutional networks
had been used to win other machine learning and computer vision contests with
less impact for years earlier.
Convolutional nets were some of the first working deep networks trained with
back-propagation. It is not entirely clear why convolutional networks succeeded
when general back-propagation networks were considered to have failed. It may
simply be that convolutional networks were more computationally efficient than
fully connected networks, so it was easier to run multiple experiments with them
and tune their implementation and hyperparameters. Larger networks also seem
to be easier to train. With modern hardware, large fully connected networks
appear to perform reasonably on many tasks, even when using datasets that were
available and activation functions that were popular during the times when fully
connected networks were believed not to work well. It may be that the primary
barriers to the success of neural networks were psychological (practitioners did
not expect neural networks to work, so they did not make a serious effort to use
neural networks). Whatever the case, it is fortunate that convolutional networks
performed well decades ago. In many ways, they carried the torch for the rest of
deep learning and paved the way to the acceptance of neural networks in general.
Convolutional networks provide a way to specialize neural networks to work
with data that has a clear grid-structured topology and to scale such models to
very large size. This approach has been the most successful on a two-dimensional,
image topology. To process one-dimensional, sequential data, we turn next to
another powerful specialization of the neural networks framework: recurrent neural
networks.
Chapter 10
Sequence Modeling: Recurrent
and Recursive Nets
Recurrent neural networks or RNNs (Rumelhart et al., 1986a) are a family of
neural networks for processing sequential data. Much as a convolutional network
is a neural network that is specialized for processing a grid of values X such as
an image, a recurrent neural network is a neural network that is specialized for
processing a sequence of values x(1), . . . , x(τ). Just as convolutional networks
can readily scale to images with large width and height, and some convolutional
networks can process images of variable size, recurrent networks can scale to much
longer sequences than would be practical for networks without sequence-based
specialization. Most recurrent networks can also process sequences of variable
length.
To go from multi-layer networks to recurrent networks, we need to take advan-
tage of one of the early ideas found in machine learning and statistical models of
the 1980s: sharing parameters across different parts of a model. Parameter sharing
makes it possible to extend and apply the model to examples of different forms
(different lengths, here) and generalize across them. If we had separate parameters
for each value of the time index, we could not generalize to sequence lengths not
seen during training, nor share statistical strength across different sequence lengths
and across different positions in time. Such sharing is particularly important when
a specific piece of information can occur at multiple positions within the sequence.
For example, consider the two sentences “I went to Nepal in 2009” and “In 2009,
I went to Nepal.” If we ask a machine learning model to read each sentence and
extract the year in which the narrator went to Nepal, we would like it to recognize
the year 2009 as the relevant piece of information, whether it appears in the sixth
word or the second word of the sentence. Suppose that we trained a feedforward
network that processes sentences of fixed length. A traditional fully connected
feedforward network would have separate parameters for each input feature, so it
would need to learn all of the rules of the language separately at each position in
the sentence. By comparison, a recurrent neural network shares the same weights
across several time steps.
A related idea is the use of convolution across a 1-D temporal sequence. This
convolutional approach is the basis for time-delay neural networks (Lang and
Hinton, 1988; Waibel et al., 1989; Lang et al., 1990). The convolution operation
allows a network to share parameters across time, but is shallow. The output
of convolution is a sequence where each member of the output is a function of
a small number of neighboring members of the input. The idea of parameter
sharing manifests in the application of the same convolution kernel at each time
step. Recurrent networks share parameters in a different way. Each member of the
output is a function of the previous members of the output. Each member of the
output is produced using the same update rule applied to the previous outputs.
This recurrent formulation results in the sharing of parameters through a very
deep computational graph.
For the simplicity of exposition, we refer to RNNs as operating on a sequence
that contains vectors x(t) with the time step index t ranging from 1 to τ. In
practice, recurrent networks usually operate on minibatches of such sequences,
with a different sequence length τ for each member of the minibatch. We have
omitted the minibatch indices to simplify notation. Moreover, the time step index
need not literally refer to the passage of time in the real world. Sometimes it refers
only to the position in the sequence. RNNs may also be applied in two dimensions
across spatial data such as images, and even when applied to data involving time,
the network may have connections that go backwards in time, provided that the
entire sequence is observed before it is provided to the network.
This chapter extends the idea of a computational graph to include cycles. These
cycles represent the influence of the present value of a variable on its own value
at a future time step. Such computational graphs allow us to define recurrent
neural networks. We then describe many different ways to construct, train, and
use recurrent neural networks.
For more information on recurrent neural networks than is available in this
chapter, we refer the reader to the textbook of Graves (2012).
10.1 Unfolding Computational Graphs
A computational graph is a way to formalize the structure of a set of computations,
such as those involved in mapping inputs and parameters to outputs and loss.
Please refer to section 6.5.1 for a general introduction. In this section we explain
the idea of unfolding a recursive or recurrent computation into a computational
graph that has a repetitive structure, typically corresponding to a chain of events.
Unfolding this graph results in the sharing of parameters across a deep network
structure.
For example, consider the classical form of a dynamical system:
    s^{(t)} = f(s^{(t-1)}; \theta),                                        (10.1)

where s(t) is called the state of the system.

Equation 10.1 is recurrent because the definition of s at time t refers back to
the same definition at time t − 1.

For a finite number of time steps τ, the graph can be unfolded by applying
the definition τ − 1 times. For example, if we unfold equation 10.1 for τ = 3 time
steps, we obtain

    s^{(3)} = f(s^{(2)}; \theta)                                           (10.2)
            = f(f(s^{(1)}; \theta); \theta).                               (10.3)

Unfolding the equation by repeatedly applying the definition in this way has
yielded an expression that does not involve recurrence. Such an expression can
now be represented by a traditional directed acyclic computational graph. The
unfolded computational graph of equation 10.1 and equation 10.3 is illustrated in
figure 10.1.
Figure 10.1: The classical dynamical system described by equation 10.1, illustrated as an
unfolded computational graph. Each node represents the state at some time t and the
function f maps the state at t to the state at t + 1. The same parameters (the same value
of θ used to parametrize f) are used for all time steps.
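A tiny sketch of this unfolding for τ = 3; the particular transition function and parameter values are arbitrary illustrations.

```python
import numpy as np

theta = np.array([[0.9, 0.1], [-0.2, 0.8]])   # parameters shared across all time steps

def f(s, theta):
    return np.tanh(theta @ s)                 # one possible choice of transition function

s1 = np.array([1.0, -1.0])                    # initial state s(1)

# Recurrent form: apply the same f, with the same theta, at each step.
s = s1
for _ in range(2):
    s = f(s, theta)

# Unfolded form: the same computation as an explicit composition (equation 10.3).
assert np.allclose(s, f(f(s1, theta), theta))
```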
As another example, let us consider a dynamical system driven by an external
signal x(t),

    s^{(t)} = f(s^{(t-1)}, x^{(t)}; \theta),                               (10.4)
where we see that the state now contains information about the whole past sequence.
Recurrent neural networks can be built in many different ways. Much as
almost any function can be considered a feedforward neural network, essentially
any function involving recurrence can be considered a recurrent neural network.
Many recurrent neural networks use equation 10.5 or a similar equation to
define the values of their hidden units. To indicate that the state is the hidden
units of the network, we now rewrite equation 10.4 using the variable h to represent
the state:

    h^{(t)} = f(h^{(t-1)}, x^{(t)}; \theta),                               (10.5)

illustrated in figure 10.2, typical RNNs will add extra architectural features such
as output layers that read information out of the state h to make predictions.
When the recurrent network is trained to perform a task that requires predicting
the future from the past, the network typically learns to use h(t) as a kind of lossy
summary of the task-relevant aspects of the past sequence of inputs up to t. This
summary is in general necessarily lossy, since it maps an arbitrary length sequence
(x(t), x(t−1), x(t−2), . . . , x(2), x(1)) to a fixed length vector h(t). Depending on the
training criterion, this summary might selectively keep some aspects of the past
sequence with more precision than other aspects. For example, if the RNN is used
in statistical language modeling, typically to predict the next word given previous
words, it may not be necessary to store all of the information in the input sequence
up to time t, but rather only enough information to predict the rest of the sentence.
The most demanding situation is when we ask h(t) to be rich enough to allow
one to approximately recover the input sequence, as in autoencoder frameworks
(chapter 14).
Figure 10.2: A recurrent network with no outputs. This recurrent network just processes
information from the input x by incorporating it into the state h that is passed forward
through time. (Left) Circuit diagram. The black square indicates a delay of a single time
step. (Right) The same network seen as an unfolded computational graph, where each
node is now associated with one particular time instance.
Equation 10.5 can be drawn in two different ways. One way to draw the RNN
is with a diagram containing one node for every component that might exist in a
physical implementation of the model, such as a biological neural network. In this
view, the network defines a circuit that operates in real time, with physical parts
whose current state can influence their future state, as in the left of figure 10.2.
Throughout this chapter, we use a black square in a circuit diagram to indicate
that an interaction takes place with a delay of a single time step, from the state
at time t to the state at time t + 1. The other way to draw the RNN is as an
unfolded computational graph, in which each component is represented by many
different variables, with one variable per time step, representing the state of the
component at that point in time. Each variable for each time step is drawn as a
separate node of the computational graph, as in the right of figure 10.2. What we
call unfolding is the operation that maps a circuit as in the left side of the figure
to a computational graph with repeated pieces as in the right side. The unfolded
graph now has a size that depends on the sequence length.
We can represent the unfolded recurrence after t steps with a function g(t):

    h^{(t)} = g^{(t)}(x^{(t)}, x^{(t-1)}, x^{(t-2)}, \ldots, x^{(2)}, x^{(1)})        (10.6)
            = f(h^{(t-1)}, x^{(t)}; \theta).                                          (10.7)

The function g(t) takes the whole past sequence (x(t), x(t−1), x(t−2), . . . , x(2), x(1))
as input and produces the current state, but the unfolded recurrent structure
allows us to factorize g(t) into repeated application of a function f. The unfolding
process thus introduces two major advantages:
1. Regardless of the sequence length, the learned model always has the same
input size, because it is specified in terms of transition from one state to
another state, rather than specified in terms of a variable-length history of
states.
2. It is possible to use the same transition function f with the same parameters
at every time step.
These two factors make it possible to learn a single model f that operates on
all time steps and all sequence lengths, rather than needing to learn a separate
model g(t) for all possible time steps. Learning a single, shared model allows
generalization to sequence lengths that did not appear in the training set, and
allows the model to be estimated with far fewer training examples than would be
required without parameter sharing.
Both the recurrent graph and the unrolled graph have their uses. The recurrent
graph is succinct. The unfolded graph provides an explicit description of which
computations to perform. The unfolded graph also helps to illustrate the idea of
information flow forward in time (computing outputs and losses) and backward
in time (computing gradients) by explicitly showing the path along which this
information flows.
10.2 Recurrent Neural Networks
Armed with the graph unrolling and parameter sharing ideas of section 10.1, we
can design a wide variety of recurrent neural networks.
Figure 10.3: The computational graph to compute the training loss of a recurrent network
that maps an input sequence of x values to a corresponding sequence of output o values.
A loss L measures how far each o is from the corresponding training target y. When using
softmax outputs, we assume o is the unnormalized log probabilities. The loss L internally
computes ŷ = softmax(o) and compares this to the target y. The RNN has input-to-hidden
connections parametrized by a weight matrix U, hidden-to-hidden recurrent connections
parametrized by a weight matrix W, and hidden-to-output connections parametrized by
a weight matrix V. Equation 10.8 defines forward propagation in this model. (Left) The
RNN and its loss drawn with recurrent connections. (Right) The same seen as a time-
unfolded computational graph, where each node is now associated with one particular
time instance.
Some examples of important design patterns for recurrent neural networks
include the following:
• Recurrent networks that produce an output at each time step and have
recurrent connections between hidden units, illustrated in figure 10.3.
• Recurrent networks that produce an output at each time step and have
recurrent connections only from the output at one time step to the hidden
units at the next time step, illustrated in figure 10.4.
• Recurrent networks with recurrent connections between hidden units, that
read an entire sequence and then produce a single output, illustrated in
figure 10.5.

Figure 10.3 is a reasonably representative example that we return to throughout
most of the chapter.
The recurrent neural network of figure 10.3 and equation 10.8 is universal in the
sense that any function computable by a Turing machine can be computed by such
a recurrent network of a finite size. The output can be read from the RNN after
a number of time steps that is asymptotically linear in the number of time steps
used by the Turing machine and asymptotically linear in the length of the input
(Siegelmann and Sontag, 1991; Siegelmann, 1995; Siegelmann and Sontag, 1995;
Hyotyniemi, 1996). The functions computable by a Turing machine are discrete,
so these results regard exact implementation of the function, not approximations.
The RNN, when used as a Turing machine, takes a binary sequence as input and its
outputs must be discretized to provide a binary output. It is possible to compute all
functions in this setting using a single specific RNN of finite size (Siegelmann and
Sontag (1995) use 886 units). The “input” of the Turing machine is a specification
of the function to be computed, so the same network that simulates this Turing
machine is sufficient for all problems. The theoretical RNN used for the proof
can simulate an unbounded stack by representing its activations and weights with
rational numbers of unbounded precision.
We now develop the forward propagation equations for the RNN depicted in
figure 10.3. The figure does not specify the choice of activation function for the
hidden units. Here we assume the hyperbolic tangent activation function. Also,
the figure does not specify exactly what form the output and loss function take.
Here we assume that the output is discrete, as if the RNN is used to predict words
or characters. A natural way to represent discrete variables is to regard the output
o as giving the unnormalized log probabilities of each possible value of the discrete
variable. We can then apply the softmax operation as a post-processing step to
obtain a vector ŷ of normalized probabilities over the output. Forward propagation
begins with a specification of the initial state h(0). Then, for each time step from
t = 1 to t = τ, we apply the following update equations:

    a^{(t)} = b + W h^{(t-1)} + U x^{(t)}                                  (10.8)
    h^{(t)} = \tanh(a^{(t)})                                               (10.9)
    o^{(t)} = c + V h^{(t)}                                                (10.10)
    \hat{y}^{(t)} = \operatorname{softmax}(o^{(t)}),                       (10.11)
where the parameters are the bias vectors b and c along with the weight matrices
U, V and W, respectively for input-to-hidden, hidden-to-output and hidden-to-
hidden connections. This is an example of a recurrent network that maps an
input sequence to an output sequence of the same length. The total loss for a
given sequence of x values paired with a sequence of y values would then be just
the sum of the losses over all the time steps. For example, if L(t) is the negative
log-likelihood of y(t) given x(1), . . . , x(t), then
L

{x(1)
, . . . , x( )
τ } {
, y(1)
, . . . , y( )
τ }

(10.12)
=

t
L( )
t
(10.13)
= −

t
log pmodel

y( )
t
| {x(1)
, . . . , x( )
t
}

, (10.14)
where p_model(y^(t) | {x^(1), . . . , x^(t)}) is given by reading the entry for y^(t) from the model's output vector ŷ^(t). Computing the gradient of this loss function with respect to the parameters is an expensive operation. The gradient computation involves performing a forward propagation pass moving left to right through our illustration of the unrolled graph in figure 10.3, followed by a backward propagation pass moving right to left through the graph. The runtime is O(τ) and cannot be reduced by parallelization because the forward propagation graph is inherently sequential; each time step may only be computed after the previous one. States computed in the forward pass must be stored until they are reused during the backward pass, so the memory cost is also O(τ). The back-propagation algorithm applied to the unrolled graph with O(τ) cost is called back-propagation through time, or BPTT, and is discussed further in section 10.2.2. The network with recurrence between hidden units is thus very powerful but also expensive to train. Is there an alternative?
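For concreteness, the following minimal NumPy sketch (with arbitrary illustrative dimensions and randomly initialized parameters, not taken from the text) implements the forward propagation of equations 10.8 through 10.11 and accumulates the negative log-likelihood loss of equation 10.14 over one sequence:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, tau = 4, 8, 3, 5   # hypothetical sizes

# Parameters: biases b, c and weight matrices U (input-to-hidden),
# W (hidden-to-hidden), V (hidden-to-output).
U = rng.normal(scale=0.1, size=(n_hidden, n_in))
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
V = rng.normal(scale=0.1, size=(n_out, n_hidden))
b = np.zeros(n_hidden)
c = np.zeros(n_out)

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def forward(xs, ys, h0):
    """Equations 10.8-10.11 for t = 1..tau, plus the loss of equation 10.14."""
    h, loss, hs, yhats = h0, 0.0, [], []
    for x, y in zip(xs, ys):
        a = b + W @ h + U @ x          # eq. 10.8
        h = np.tanh(a)                 # eq. 10.9
        o = c + V @ h                  # eq. 10.10
        yhat = softmax(o)              # eq. 10.11
        loss -= np.log(yhat[y])        # eq. 10.14: negative log-likelihood
        hs.append(h)
        yhats.append(yhat)
    return loss, hs, yhats

xs = [rng.normal(size=n_in) for _ in range(tau)]    # input sequence x^(1..tau)
ys = [rng.integers(n_out) for _ in range(tau)]      # target symbols y^(1..tau)
loss, hs, yhats = forward(xs, ys, h0=np.zeros(n_hidden))
print("total loss L =", loss)

The hidden states and softmax outputs are saved because, as discussed above, they must be stored for reuse during the backward pass.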
10.2.1 Teacher Forcing and Networks with Output Recurrence
The network with recurrent connections only from the output at one time step to
the hidden units at the next time step (shown in figure 10.4) is strictly less powerful
because it lacks hidden-to-hidden recurrent connections. For example, it cannot
simulate a universal Turing machine. Because this network lacks hidden-to-hidden
recurrence, it requires that the output units capture all of the information about
the past that the network will use to predict the future. Because the output units
are explicitly trained to match the training set targets, they are unlikely to capture
the necessary information about the past history of the input, unless the user
knows how to describe the full state of the system and provides it as part of the
training set targets. The advantage of eliminating hidden-to-hidden recurrence
is that, for any loss function based on comparing the prediction at time t to the
training target at time t, all the time steps are decoupled. Training can thus be
parallelized, with the gradient for each step t computed in isolation. There is no
need to compute the output for the previous time step first, because the training
set provides the ideal value of that output.
Figure 10.5: Time-unfolded recurrent neural network with a single output at the end of the sequence. Such a network can be used to summarize a sequence and produce a fixed-size representation used as input for further processing. There might be a target right at the end (as depicted here) or the gradient on the output o^(τ) can be obtained by back-propagating from further downstream modules.
Models that have recurrent connections from their outputs leading back into
the model may be trained with teacher forcing. Teacher forcing is a procedure
that emerges from the maximum likelihood criterion, in which during training the
model receives the ground truth output y^(t) as input at time t + 1. We can see
this by examining a sequence with two time steps. The conditional maximum
Figure 10.6: Illustration of teacher forcing. Teacher forcing is a training technique that is applicable to RNNs that have connections from their output to their hidden states at the next time step. (Left) At train time, we feed the correct output y^(t) drawn from the train set as input to h^(t+1). (Right) When the model is deployed, the true output is generally not known. In this case, we approximate the correct output y^(t) with the model's output o^(t), and feed the output back into the model.
likelihood criterion is
log p(y^(1), y^(2) | x^(1), x^(2))    (10.15)

= log p(y^(2) | y^(1), x^(1), x^(2)) + log p(y^(1) | x^(1), x^(2)).    (10.16)
In this example, we see that at time t = 2, the model is trained to maximize the
conditional probability of y(2) given both the x sequence so far and the previous y
value from the training set. Maximum likelihood thus specifies that during training,
rather than feeding the model’s own output back into itself, these connections
should be fed with the target values specifying what the correct output should be.
This is illustrated in figure 10.6.
We originally motivated teacher forcing as allowing us to avoid back-propagation
through time in models that lack hidden-to-hidden connections. Teacher forcing
may still be applied to models that have hidden-to-hidden connections so long as
they have connections from the output at one time step to values computed in the
next time step. However, as soon as the hidden units become a function of earlier
time steps, the BPTT algorithm is necessary. Some models may thus be trained
with both teacher forcing and BPTT.
The disadvantage of strict teacher forcing arises if the network is going to be
later used in an open-loop mode, with the network outputs (or samples from
the output distribution) fed back as input. In this case, the kind of inputs that
the network sees during training could be quite different from the kind of inputs
that it will see at test time. One way to mitigate this problem is to train with
both teacher-forced inputs and with free-running inputs, for example by predicting
the correct target a number of steps in the future through the unfolded recurrent
output-to-input paths. In this way, the network can learn to take into account
input conditions (such as those it generates itself in the free-running mode) not
seen during training and how to map the state back towards one that will make
the network generate proper outputs after a few steps. Another approach (Bengio et al., 2015b) to mitigate the gap between the inputs seen at train time and the
inputs seen at test time randomly chooses to use generated values or actual data
values as input. This approach exploits a curriculum learning strategy to gradually
use more of the generated values as input.
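The following minimal sketch (hypothetical sizes and an untrained toy model whose only recurrence runs through the fed-back output value, in the spirit of figure 10.4) illustrates the operational difference between teacher-forced training and open-loop generation: only the value fed into the next time step changes.

import numpy as np

rng = np.random.default_rng(1)
n_x, n_y, n_hidden = 4, 3, 6                        # hypothetical sizes
U = rng.normal(scale=0.1, size=(n_hidden, n_x))     # input-to-hidden
W_fb = rng.normal(scale=0.1, size=(n_hidden, n_y))  # fed-back-value-to-hidden
V = rng.normal(scale=0.1, size=(n_y, n_hidden))     # hidden-to-output

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def step(x, fed_back):
    # h^(t) depends on the current input and on the value fed back from the
    # previous output; there is no hidden-to-hidden recurrence in this toy model.
    h = np.tanh(U @ x + W_fb @ fed_back)
    return softmax(V @ h)

xs = [rng.normal(size=n_x) for _ in range(3)]       # toy input sequence
ys = [0, 2, 1]                                      # toy training targets y^(t)

# Train time (teacher forcing): condition step t+1 on the true y^(t).
fed = np.zeros(n_y)
for x, y_true in zip(xs, ys):
    y_hat = step(x, fed)
    fed = one_hot(y_true, n_y)                      # ground truth fed forward

# Test time (open loop): condition step t+1 on the model's own prediction.
fed = np.zeros(n_y)
for x in xs:
    y_hat = step(x, fed)
    fed = one_hot(int(np.argmax(y_hat)), n_y)       # model output fed forward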
10.2.2 Computing the Gradient in a Recurrent Neural Network
Computing the gradient through a recurrent neural network is straightforward.
One simply applies the generalized back-propagation algorithm of section 6.5.6
to the unrolled computational graph. No specialized algorithms are necessary.
Gradients obtained by back-propagation may then be used with any general-purpose
gradient-based techniques to train an RNN.
To gain some intuition for how the BPTT algorithm behaves, we provide an
example of how to compute gradients by BPTT for the RNN equations above
(equation 10.8 and equation 10.12). The nodes of our computational graph include the parameters U, V, W, b and c as well as the sequence of nodes indexed by t for x^(t), h^(t), o^(t) and L^(t). For each node N we need to compute the gradient
∇_N L recursively, based on the gradient computed at nodes that follow it in the graph. We start the recursion with the nodes immediately preceding the final loss:

∂L/∂L^(t) = 1.    (10.17)
In this derivation we assume that the outputs o^(t) are used as the argument to the softmax function to obtain the vector ŷ of probabilities over the output. We also assume that the loss is the negative log-likelihood of the true target y^(t) given the input so far. The gradient ∇_{o^(t)} L on the outputs at time step t, for all i, t, is as follows:
(∇_{o^(t)} L)_i = ∂L/∂o^(t)_i = (∂L/∂L^(t)) (∂L^(t)/∂o^(t)_i) = ŷ^(t)_i − 1_{i,y^(t)}.    (10.18)
We work our way backwards, starting from the end of the sequence. At the final time step τ, h^(τ) only has o^(τ) as a descendant, so its gradient is simple:

∇_{h^(τ)} L = V^⊤ ∇_{o^(τ)} L.    (10.19)
We can then iterate backwards in time to back-propagate gradients through time, from t = τ − 1 down to t = 1, noting that h^(t) (for t < τ) has as descendants both o^(t) and h^(t+1). Its gradient is thus given by

∇_{h^(t)} L = (∂h^(t+1)/∂h^(t))^⊤ (∇_{h^(t+1)} L) + (∂o^(t)/∂h^(t))^⊤ (∇_{o^(t)} L)    (10.20)

            = W^⊤ diag(1 − (h^(t+1))^2) (∇_{h^(t+1)} L) + V^⊤ (∇_{o^(t)} L),    (10.21)
where diag(1 − (h^(t+1))^2) indicates the diagonal matrix containing the elements 1 − (h^(t+1)_i)^2. This is the Jacobian of the hyperbolic tangent associated with the hidden unit i at time t + 1.
Once the gradients on the internal nodes of the computational graph are obtained, we can obtain the gradients on the parameter nodes. Because the parameters are shared across many time steps, we must take some care when denoting calculus operations involving these variables. The equations we wish to implement use the bprop method of section 6.5.6, that computes the contribution of a single edge in the computational graph to the gradient. However, the ∇_W f operator used in calculus takes into account the contribution of W to the value of f due to all edges in the computational graph. To resolve this ambiguity, we introduce dummy variables W^(t) that are defined to be copies of W but with each W^(t) used only at time step t. We may then use ∇_{W^(t)} to denote the contribution of the weights at time step t to the gradient.
Using this notation, the gradient on the remaining parameters is given by:
∇_c L = Σ_t (∂o^(t)/∂c)^⊤ ∇_{o^(t)} L = Σ_t ∇_{o^(t)} L    (10.22)

∇_b L = Σ_t (∂h^(t)/∂b^(t))^⊤ ∇_{h^(t)} L = Σ_t diag(1 − (h^(t))^2) ∇_{h^(t)} L    (10.23)

∇_V L = Σ_t Σ_i (∂L/∂o^(t)_i) ∇_V o^(t)_i = Σ_t (∇_{o^(t)} L) h^(t)⊤    (10.24)

∇_W L = Σ_t Σ_i (∂L/∂h^(t)_i) ∇_{W^(t)} h^(t)_i    (10.25)

      = Σ_t diag(1 − (h^(t))^2) (∇_{h^(t)} L) h^(t−1)⊤    (10.26)

∇_U L = Σ_t Σ_i (∂L/∂h^(t)_i) ∇_{U^(t)} h^(t)_i    (10.27)

      = Σ_t diag(1 − (h^(t))^2) (∇_{h^(t)} L) x^(t)⊤    (10.28)
We do not need to compute the gradient with respect to x^(t) for training because
it does not have any parameters as ancestors in the computational graph defining
the loss.
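As a minimal illustration of equations 10.17 through 10.28, the following NumPy sketch (a transcription of the recursions above, intended to be paired with the forward-propagation sketch after equation 10.14, which supplies the saved hidden states hs and outputs yhats) accumulates the parameter gradients by iterating backward through time:

import numpy as np

def bptt_grads(xs, ys, hs, yhats, h0, W, V):
    """Gradients of the summed NLL loss, following eqs. 10.18-10.28.

    xs, ys    : input vectors and integer targets for t = 1..tau
    hs, yhats : hidden states and softmax outputs saved by forward propagation
    """
    tau = len(xs)
    n_hidden, n_out = W.shape[0], V.shape[0]
    grad_b = np.zeros(n_hidden)
    grad_c = np.zeros(n_out)
    grad_U = np.zeros((n_hidden, xs[0].shape[0]))
    grad_V = np.zeros_like(V)
    grad_W = np.zeros_like(W)

    dh_next = np.zeros(n_hidden)              # gradient with respect to h^(t+1)
    for t in reversed(range(tau)):
        do = yhats[t].copy()
        do[ys[t]] -= 1.0                      # eq. 10.18: yhat - one-hot(y)
        h_prev = hs[t - 1] if t > 0 else h0
        if t == tau - 1:
            dh = V.T @ do                     # eq. 10.19
        else:
            jac = 1.0 - hs[t + 1] ** 2        # tanh Jacobian diagonal at t+1
            dh = W.T @ (jac * dh_next) + V.T @ do   # eq. 10.21
        dh_next = dh
        jac_t = 1.0 - hs[t] ** 2
        grad_c += do                                  # eq. 10.22
        grad_b += jac_t * dh                          # eq. 10.23
        grad_V += np.outer(do, hs[t])                 # eq. 10.24
        grad_W += np.outer(jac_t * dh, h_prev)        # eq. 10.26
        grad_U += np.outer(jac_t * dh, xs[t])         # eq. 10.28
    return grad_b, grad_c, grad_U, grad_V, grad_W

The returned gradients can then be passed to any general-purpose gradient-based optimizer, as noted above.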
10.2.3 Recurrent Networks as Directed Graphical Models
In the example recurrent network we have developed so far, the losses L^(t) were cross-entropies between training targets y^(t) and outputs o^(t). As with a feedforward
network, it is in principle possible to use almost any loss with a recurrent network.
The loss should be chosen based on the task. As with a feedforward network, we
usually wish to interpret the output of the RNN as a probability distribution, and
we usually use the cross-entropy associated with that distribution to define the loss.
Mean squared error is the cross-entropy loss associated with an output distribution
that is a unit Gaussian, for example, just as with a feedforward network.
When we use a predictive log-likelihood training objective, such as equation 10.12, we train the RNN to estimate the conditional distribution of the next sequence element y^(t) given the past inputs. This may mean that we maximize the log-likelihood

log p(y^(t) | x^(1), . . . , x^(t)),    (10.29)

or, if the model includes connections from the output at one time step to the next time step,

log p(y^(t) | x^(1), . . . , x^(t), y^(1), . . . , y^(t−1)).    (10.30)
Decomposing the joint probability over the sequence of y values as a series of
one-step probabilistic predictions is one way to capture the full joint distribution
across the whole sequence. When we do not feed past y values as inputs that
condition the next step prediction, the directed graphical model contains no edges
from any y^(i) in the past to the current y^(t). In this case, the outputs y are conditionally independent given the sequence of x values. When we do feed the actual y values (not their prediction, but the actual observed or generated values) back into the network, the directed graphical model contains edges from all y^(i) values in the past to the current y^(t) value.
As a simple example, let us consider the case where the RNN models only a sequence of scalar random variables Y = {y^(1), . . . , y^(τ)}, with no additional inputs x. The input at time step t is simply the output at time step t − 1. The RNN then defines a directed graphical model over the y variables. We parametrize the joint distribution of these observations using the chain rule (equation 3.6) for conditional probabilities:

P(Y) = P(y^(1), . . . , y^(τ)) = ∏_{t=1}^{τ} P(y^(t) | y^(t−1), y^(t−2), . . . , y^(1))    (10.31)
where the right-hand side of the bar is empty for t = 1, of course. Hence the
negative log-likelihood of a set of values {y^(1), . . . , y^(τ)} according to such a model
Figure 10.7: Fully connected graphical model for a sequence y^(1), y^(2), . . . , y^(t), . . .: every past observation y^(i) may influence the conditional distribution of some y^(t) (for t > i), given the previous values. Parametrizing the graphical model directly according to this graph (as in equation 10.6) might be very inefficient, with an ever growing number of inputs and parameters for each element of the sequence. RNNs obtain the same full connectivity but efficient parametrization, as illustrated in figure 10.8.
is

L = Σ_t L^(t),    (10.32)

where

L^(t) = − log P(y^(t) = y^(t) | y^(t−1), y^(t−2), . . . , y^(1)).    (10.33)
Figure 10.8: Introducing the state variable in the graphical model of the RNN, even though it is a deterministic function of its inputs, helps to see how we can obtain a very efficient parametrization, based on equation 10.5. Every stage in the sequence (for h^(t) and y^(t)) involves the same structure (the same number of inputs for each node) and can share the same parameters with the other stages.
The edges in a graphical model indicate which variables depend directly on other
variables. Many graphical models aim to achieve statistical and computational
efficiency by omitting edges that do not correspond to strong interactions. For
example, it is common to make the Markov assumption that the graphical model should only contain edges from {y^(t−k), . . . , y^(t−1)} to y^(t), rather than containing edges from the entire past history. However, in some cases, we believe that all past inputs should have an influence on the next element of the sequence. RNNs are useful when we believe that the distribution over y^(t) may depend on a value of y^(i) from the distant past in a way that is not captured by the effect of y^(i) on y^(t−1).
One way to interpret an RNN as a graphical model is to view the RNN as
defining a graphical model whose structure is the complete graph, able to represent
direct dependencies between any pair of y values. The graphical model over the y
values with the complete graph structure is shown in figure 10.7. The complete graph interpretation of the RNN is based on ignoring the hidden units h^(t) by marginalizing them out of the model.
It is more interesting to consider the graphical model structure of RNNs that
results from regarding the hidden units h^(t) as random variables.¹ Including the
hidden units in the graphical model reveals that the RNN provides a very efficient
parametrization of the joint distribution over the observations. Suppose that we
represented an arbitrary joint distribution over discrete values with a tabular
representation—an array containing a separate entry for each possible assignment
of values, with the value of that entry giving the probability of that assignment
occurring. If y can take on k different values, the tabular representation would
have O(k^τ) parameters. By comparison, due to parameter sharing, the number of parameters in the RNN is O(1) as a function of sequence length. The number of parameters in the RNN may be adjusted to control model capacity but is not forced to scale with sequence length. Equation 10.5 shows that the RNN parametrizes
long-term relationships between variables efficiently, using recurrent applications
of the same function f and same parameters θ at each time step. Figure 10.8
illustrates the graphical model interpretation. Incorporating the h^(t) nodes in the graphical model decouples the past and the future, acting as an intermediate quantity between them. A variable y^(i) in the distant past may influence a variable y^(t) via its effect on h. The structure of this graph shows that the model can be
efficiently parametrized by using the same conditional probability distributions at
each time step, and that when the variables are all observed, the probability of the
joint assignment of all variables can be evaluated efficiently.
Even with the efficient parametrization of the graphical model, some operations
remain computationally challenging. For example, it is difficult to predict missing
values in the middle of the sequence.

¹ The conditional distribution over these variables given their parents is deterministic. This is perfectly legitimate, though it is somewhat rare to design a graphical model with such deterministic hidden units.
The price recurrent networks pay for their reduced number of parameters is that optimizing the parameters may be difficult.
The parameter sharing used in recurrent networks relies on the assumption
that the same parameters can be used for different time steps. Equivalently, the
assumption is that the conditional probability distribution over the variables at
time t+1 given the variables at time t is stationary, meaning that the relationship
between the previous time step and the next time step does not depend on t. In
principle, it would be possible to use t as an extra input at each time step and let
the learner discover any time-dependence while sharing as much as it can between
different time steps. This would already be much better than using a different
conditional probability distribution for each t, but the network would then have to
extrapolate when faced with new values of t.
To complete our view of an RNN as a graphical model, we must describe how
to draw samples from the model. The main operation that we need to perform is
simply to sample from the conditional distribution at each time step. However,
there is one additional complication. The RNN must have some mechanism for
determining the length of the sequence. This can be achieved in various ways.
In the case when the output is a symbol taken from a vocabulary, one can
add a special symbol corresponding to the end of a sequence (Schmidhuber, 2012). When that symbol is generated, the sampling process stops. In the training set, we insert this symbol as an extra member of the sequence, immediately after x^(τ) in each training example.
Another option is to introduce an extra Bernoulli output to the model that
represents the decision to either continue generation or halt generation at each
time step. This approach is more general than the approach of adding an extra
symbol to the vocabulary, because it may be applied to any RNN, rather than
only RNNs that output a sequence of symbols. For example, it may be applied to
an RNN that emits a sequence of real numbers. The new output unit is usually a
sigmoid unit trained with the cross-entropy loss. In this approach the sigmoid is
trained to maximize the log-probability of the correct prediction as to whether the
sequence ends or continues at each time step.
Another way to determine the sequence length τ is to add an extra output to
the model that predicts the integer τ itself. The model can sample a value of τ
and then sample τ steps worth of data. This approach requires adding an extra
input to the recurrent update at each time step so that the recurrent update is
aware of whether it is near the end of the generated sequence. This extra input
can either consist of the value of τ or can consist of τ − t, the number of remaining
time steps. Without this extra input, the RNN might generate sequences that
end abruptly, such as a sentence that ends before it is complete. This approach is
based on the decomposition
P(x^(1), . . . , x^(τ)) = P(τ) P(x^(1), . . . , x^(τ) | τ).    (10.34)

The strategy of predicting τ directly is used for example by Goodfellow et al. (2014d).
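A minimal sketch of the first strategy, the end-of-sequence symbol (with an assumed vocabulary in which index 0 plays the role of that symbol, and a placeholder distribution standing in for a trained RNN's softmax output):

import numpy as np

rng = np.random.default_rng(2)
EOS = 0                                   # assumed end-of-sequence symbol
vocab_size, max_len = 5, 50

def next_token_distribution(history):
    # Placeholder for the RNN's conditional distribution p(y^(t) | y^(<t)).
    logits = rng.normal(size=vocab_size)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_sequence():
    seq = []
    for _ in range(max_len):              # hard cap in case EOS never appears
        p = next_token_distribution(seq)
        y = rng.choice(vocab_size, p=p)
        if y == EOS:                      # generated the end symbol: stop
            break
        seq.append(int(y))
    return seq

print(sample_sequence())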
10.2.4 Modeling Sequences Conditioned on Context with RNNs
In the previous section we described how an RNN could correspond to a directed
graphical model over a sequence of random variables y^(t) with no inputs x. Of course, our development of RNNs as in equation 10.8 included a sequence of inputs x^(1), x^(2), . . . , x^(τ). In general, RNNs allow the extension of the graphical
model view to represent not only a joint distribution over the y variables but
also a conditional distribution over y given x. As discussed in the context of
feedforward networks in section 6.2.1.1, any model representing a variable P(y; θ) can be reinterpreted as a model representing a conditional distribution P(y | ω) with ω = θ. We can extend such a model to represent a distribution P(y | x) by using the same P(y | ω) as before, but making ω a function of x. In the case of
an RNN, this can be achieved in different ways. We review here the most common
and obvious choices.
Previously, we have discussed RNNs that take a sequence of vectors x^(t) for
t = 1, . . . , τ as input. Another option is to take only a single vector x as input.
When x is a fixed-size vector, we can simply make it an extra input of the RNN
that generates the y sequence. Some common ways of providing an extra input to
an RNN are:
1. as an extra input at each time step, or
2. as the initial state h^(0), or
3. both.
The first and most common approach is illustrated in figure 10.9. The interaction between the input x and each hidden unit vector h^(t) is parametrized by a newly introduced weight matrix R that was absent from the model of only the sequence of y values. The same product x^⊤R is added as additional input to the hidden units at every time step. We can think of the choice of x as determining the value of x^⊤R that is effectively a new bias parameter used for each of the hidden units. The weights remain independent of the input. We can think of this model as taking the parameters θ of the non-conditional model and turning them into ω, where the bias parameters within ω are now a function of the input.
Figure 10.9: An RNN that maps a fixed-length vector x into a distribution over sequences Y. This RNN is appropriate for tasks such as image captioning, where a single image is used as input to a model that then produces a sequence of words describing the image. Each element y^(t) of the observed output sequence serves both as input (for the current time step) and, during training, as target (for the previous time step).
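A minimal sketch of this conditioning scheme (hypothetical sizes; R here is shaped so that Rx plays the role of the x^⊤R term in the text, and the feedback of y^(t−1) shown in figure 10.9 is omitted for brevity):

import numpy as np

rng = np.random.default_rng(3)
n_x, n_y, n_hidden, n_steps = 4, 3, 6, 5      # hypothetical sizes
W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))   # hidden-to-hidden
R = rng.normal(scale=0.1, size=(n_hidden, n_x))        # conditioning input
V = rng.normal(scale=0.1, size=(n_y, n_hidden))        # hidden-to-output
b = np.zeros(n_hidden)

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

x = rng.normal(size=n_x)        # single fixed-size conditioning vector
extra_bias = R @ x              # x-dependent bias shared by all time steps

h = np.zeros(n_hidden)
outputs = []
for _ in range(n_steps):
    h = np.tanh(b + W @ h + extra_bias)   # same extra input at every step
    outputs.append(softmax(V @ h))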
Rather than receiving only a single vector x as input, the RNN may receive
a sequence of vectors x^(t) as input. The RNN described in equation 10.8 corresponds to a conditional distribution P(y^(1), . . . , y^(τ) | x^(1), . . . , x^(τ)) that makes a conditional independence assumption that this distribution factorizes as

∏_t P(y^(t) | x^(1), . . . , x^(t)).    (10.35)
To remove the conditional independence assumption, we can add connections from
the output at time t to the hidden unit at time t + 1, as shown in figure 10.10. The
model can then represent arbitrary probability distributions over the y sequence.
This kind of model representing a distribution over a sequence given another
Figure 10.10: A conditional recurrent neural network mapping a variable-length sequence of x values into a distribution over sequences of y values of the same length. Compared to figure 10.3, this RNN contains connections from the previous output to the current state. These connections allow this RNN to model an arbitrary distribution over sequences of y given sequences of x of the same length. The RNN of figure 10.3 is only able to represent distributions in which the y values are conditionally independent from each other given the x values.
sequence still has one restriction, which is that the length of both sequences must
be the same. We describe how to remove this restriction in section 10.4.
Figure 10.11: Computation of a typical bidirectional recurrent neural network, meant to learn to map input sequences x to target sequences y, with loss L^(t) at each step t. The h recurrence propagates information forward in time (towards the right) while the g recurrence propagates information backward in time (towards the left). Thus at each point t, the output units o^(t) can benefit from a relevant summary of the past in its h^(t) input and from a relevant summary of the future in its g^(t) input.
10.3 Bidirectional RNNs
All of the recurrent networks we have considered up to now have a “causal” struc-
ture, meaning that the state at time t only captures information from the past,
x^(1), . . . , x^(t−1), and the present input x^(t). Some of the models we have discussed
also allow information from past y values to affect the current state when the y
values are available.
However, in many applications we want to output a prediction of y^(t) which may
depend on the whole input sequence. For example, in speech recognition, the correct
interpretation of the current sound as a phoneme may depend on the next few
phonemes because of co-articulation and potentially may even depend on the next
few words because of the linguistic dependencies between nearby words: if there
are two interpretations of the current word that are both acoustically plausible, we
may have to look far into the future (and the past) to disambiguate them. This is
also true of handwriting recognition and many other sequence-to-sequence learning
tasks, described in the next section.
Bidirectional recurrent neural networks (or bidirectional RNNs) were invented
to address that need (Schuster and Paliwal, 1997). They have been extremely successful (Graves, 2012) in applications where that need arises, such as handwriting recognition (Graves et al., 2008; Graves and Schmidhuber, 2009), speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013) and bioinformatics (Baldi et al., 1999).
As the name suggests, bidirectional RNNs combine an RNN that moves forward
through time beginning from the start of the sequence with another RNN that
moves backward through time beginning from the end of the sequence. Figure 10.11
illustrates the typical bidirectional RNN, with h^(t) standing for the state of the sub-RNN that moves forward through time and g^(t) standing for the state of the sub-RNN that moves backward through time. This allows the output units o^(t)
to compute a representation that depends on both the past and the future but
is most sensitive to the input values around time t, without having to specify a
fixed-size window around t (as one would have to do with a feedforward network,
a convolutional network, or a regular RNN with a fixed-size look-ahead buffer).
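A minimal sketch of this computation (hypothetical sizes, with separate parameters for the forward h recurrence and the backward g recurrence, and with each output reading both states):

import numpy as np

rng = np.random.default_rng(4)
n_x, n_hidden, n_out, tau = 4, 6, 3, 7   # hypothetical sizes
p = lambda *s: rng.normal(scale=0.1, size=s)
W_f, U_f = p(n_hidden, n_hidden), p(n_hidden, n_x)   # forward (h) recurrence
W_b, U_b = p(n_hidden, n_hidden), p(n_hidden, n_x)   # backward (g) recurrence
V_f, V_b = p(n_out, n_hidden), p(n_out, n_hidden)    # both states -> output

xs = [rng.normal(size=n_x) for _ in range(tau)]

# Forward sweep over time for h.
h = np.zeros(n_hidden)
hs = []
for x in xs:
    h = np.tanh(W_f @ h + U_f @ x)
    hs.append(h)

# Backward sweep over time for g.
g = np.zeros(n_hidden)
gs = [None] * tau
for t in reversed(range(tau)):
    g = np.tanh(W_b @ g + U_b @ xs[t])
    gs[t] = g

# Each output o^(t) sees a summary of the past (h) and of the future (g).
os = [V_f @ hs[t] + V_b @ gs[t] for t in range(tau)]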
This idea can be naturally extended to 2-dimensional input, such as images, by
having four RNNs, each one going in one of the four directions: up, down, left, right. At each point (i, j) of a 2-D grid, an output O_{i,j} could then compute a
representation that would capture mostly local information but could also depend
on long-range inputs, if the RNN is able to learn to carry that information.
Compared to a convolutional network, RNNs applied to images are typically more
expensive but allow for long-range lateral interactions between features in the
same feature map (Visin et al., 2015; Kalchbrenner et al., 2015). Indeed, the
forward propagation equations for such RNNs may be written in a form that shows
they use a convolution that computes the bottom-up input to each layer, prior
to the recurrent propagation across the feature map that incorporates the lateral
interactions.
10.4 Encoder-Decoder Sequence-to-Sequence Architectures
We have seen in figure 10.5 how an RNN can map an input sequence to a fixed-size vector. We have seen in figure 10.9 how an RNN can map a fixed-size vector to a sequence. We have seen in figures 10.3, 10.4, 10.10 and 10.11 how an RNN can
map an input sequence to an output sequence of the same length.
Figure 10.12: Example of an encoder-decoder or sequence-to-sequence RNN architecture, for learning to generate an output sequence (y^(1), . . . , y^(n_y)) given an input sequence (x^(1), x^(2), . . . , x^(n_x)). It is composed of an encoder RNN that reads the input sequence and a decoder RNN that generates the output sequence (or computes the probability of a given output sequence). The final hidden state of the encoder RNN is used to compute a generally fixed-size context variable C which represents a semantic summary of the input sequence and is given as input to the decoder RNN.
Here we discuss how an RNN can be trained to map an input sequence to an
output sequence which is not necessarily of the same length. This comes up in
many applications, such as speech recognition, machine translation or question
answering, where the input and output sequences in the training set are generally
not of the same length (although their lengths might be related).
We often call the input to the RNN the “context.” We want to produce a
representation of this context, C. The context C might be a vector or sequence of vectors that summarize the input sequence X = (x^(1), . . . , x^(n_x)).
The simplest RNN architecture for mapping a variable-length sequence to
another variable-length sequence was first proposed by Cho et al. (2014a) and shortly after by Sutskever et al. (2014), who independently developed that architecture and were the first to obtain state-of-the-art translation using this approach.
The former system is based on scoring proposals generated by another machine
translation system, while the latter uses a standalone recurrent network to generate
the translations. These authors respectively called this architecture, illustrated
in figure 10.12, the encoder-decoder or sequence-to-sequence architecture. The
idea is very simple: (1) an encoder or reader or input RNN processes the input
sequence. The encoder emits the context C, usually as a simple function of its
final hidden state. (2) a decoder or writer or output RNN is conditioned on
that fixed-length vector (just like in figure 10.9) to generate the output sequence Y = (y^(1), . . . , y^(n_y)). The innovation of this kind of architecture over those presented in earlier sections of this chapter is that the lengths n_x and n_y can vary from each other, while previous architectures constrained n_x = n_y = τ. In a
sequence-to-sequence architecture, the two RNNs are trained jointly to maximize
the average of log P(y(1), . . . , y(ny) | x(1), . . . , x(nx)) over all the pairs of x and y
sequences in the training set. The last state h^(n_x) of the encoder RNN is typically
used as a representation C of the input sequence that is provided as input to the
decoder RNN.
If the context C is a vector, then the decoder RNN is simply a vector-to-
sequence RNN as described in section 10.2.4. As we have seen, there are at least
two ways for a vector-to-sequence RNN to receive input. The input can be provided
as the initial state of the RNN, or the input can be connected to the hidden units
at each time step. These two ways can also be combined.
There is no constraint that the encoder must have the same size of hidden layer
as the decoder.
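A minimal sketch of the encoder-decoder idea (hypothetical sizes; the context C is taken to be the encoder's final hidden state and is used here only to initialize the decoder, one of the conditioning options mentioned above):

import numpy as np

rng = np.random.default_rng(5)
n_x, n_y, n_enc, n_dec = 4, 3, 6, 6      # hypothetical sizes
p = lambda *s: rng.normal(scale=0.1, size=s)
W_enc, U_enc = p(n_enc, n_enc), p(n_enc, n_x)
W_dec, U_dec = p(n_dec, n_dec), p(n_dec, n_y)
W_init = p(n_dec, n_enc)                 # maps context C to decoder start state
V_dec = p(n_y, n_dec)

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def encode(xs):
    h = np.zeros(n_enc)
    for x in xs:                          # read the whole input sequence
        h = np.tanh(W_enc @ h + U_enc @ x)
    return h                              # context C = final encoder state

def decode(C, n_steps):
    h = np.tanh(W_init @ C)               # condition the decoder on C
    y_prev, ys = np.zeros(n_y), []
    for _ in range(n_steps):              # generate an output of length n_steps
        h = np.tanh(W_dec @ h + U_dec @ y_prev)
        y_hat = softmax(V_dec @ h)
        y_prev = np.eye(n_y)[int(np.argmax(y_hat))]
        ys.append(y_hat)
    return ys

xs = [rng.normal(size=n_x) for _ in range(5)]   # input length n_x = 5
ys = decode(encode(xs), n_steps=3)              # output length n_y = 3

Note that the input and output lengths differ, which is precisely the freedom this architecture provides.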
One clear limitation of this architecture is when the context C output by the
encoder RNN has a dimension that is too small to properly summarize a long
sequence. This phenomenon was observed by Bahdanau et al. (2015) in the context
of machine translation. They proposed to make C a variable-length sequence rather
than a fixed-size vector. Additionally, they introduced an attention mechanism
that learns to associate elements of the sequence C to elements of the output
sequence. See section 12.4.5.1 for more details.
10.5 Deep Recurrent Networks
The computation in most RNNs can be decomposed into three blocks of parameters
and associated transformations:
1. from the input to the hidden state,
2. from the previous hidden state to the next hidden state, and
3. from the hidden state to the output.
With the RNN architecture of figure , each of these three blocks is associated
10.3
with a single weight matrix. In other words, when the network is unfolded, each
of these corresponds to a shallow transformation. By a shallow transformation,
we mean a transformation that would be represented by a single layer within
a deep MLP. Typically this is a transformation represented by a learned affine
transformation followed by a fixed nonlinearity.
Would it be advantageous to introduce depth in each of these operations?
Experimental evidence (Graves et al., 2013; Pascanu et al., 2014a) strongly suggests
so. The experimental evidence is in agreement with the idea that we need enough
depth in order to perform the required mappings. See also Schmidhuber (1992), El Hihi and Bengio (1996), or Jaeger (2007a) for earlier work on deep RNNs.
Graves et al. (2013) were the first to show a significant benefit of decomposing the state of an RNN into multiple layers as in figure 10.13 (left). We can think
of the lower layers in the hierarchy depicted in figure 10.13a as playing a role in transforming the raw input into a representation that is more appropriate, at the higher levels of the hidden state. Pascanu et al. (2014a) go a step further and propose to have a separate MLP (possibly deep) for each of the three blocks enumerated above, as illustrated in figure 10.13b. Considerations of representational
capacity suggest to allocate enough capacity in each of these three steps, but doing
so by adding depth may hurt learning by making optimization difficult. In general,
it is easier to optimize shallower architectures, and adding the extra depth of
figure 10.13b makes the shortest path from a variable in time step t to a variable
in time step t + 1 become longer. For example, if an MLP with a single hidden
layer is used for the state-to-state transition, we have doubled the length of the
shortest path between variables in any two different time steps, compared with the
ordinary RNN of figure 10.3. However, as argued by Pascanu et al. (2014a), this
Figure 10.13: A recurrent neural network can be made deep in many ways (Pascanu et al., 2014a). (a) The hidden recurrent state can be broken down into groups organized hierarchically. (b) Deeper computation (e.g., an MLP) can be introduced in the input-to-hidden, hidden-to-hidden and hidden-to-output parts. This may lengthen the shortest path linking different time steps. (c) The path-lengthening effect can be mitigated by introducing skip connections.
can be mitigated by introducing skip connections in the hidden-to-hidden path, as
illustrated in figure 10.13c.
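A minimal sketch of the hierarchical decomposition of figure 10.13(a) (hypothetical sizes; two stacked recurrent layers, where the lower layer's state serves as the input to the upper layer at the same time step):

import numpy as np

rng = np.random.default_rng(6)
n_x, n_h1, n_h2, tau = 4, 6, 6, 5        # hypothetical sizes
p = lambda *s: rng.normal(scale=0.1, size=s)
W1, U1 = p(n_h1, n_h1), p(n_h1, n_x)     # lower recurrent layer
W2, U2 = p(n_h2, n_h2), p(n_h2, n_h1)    # upper recurrent layer

h1, h2 = np.zeros(n_h1), np.zeros(n_h2)
for x in (rng.normal(size=n_x) for _ in range(tau)):
    h1 = np.tanh(W1 @ h1 + U1 @ x)       # transforms the raw input for the layer above
    h2 = np.tanh(W2 @ h2 + U2 @ h1)      # higher-level portion of the hidden state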
10.6 Recursive Neural Networks
Figure 10.14: A recursive network has a computational graph that generalizes that of the recurrent network from a chain to a tree. A variable-size sequence x^(1), x^(2), . . . , x^(t) can be mapped to a fixed-size representation (the output o), with a fixed set of parameters (the weight matrices U, V, W). The figure illustrates a supervised learning case in which some target y is provided which is associated with the whole sequence.
Recursive neural networks² represent yet another generalization of recurrent networks, with a different kind of computational graph, which is structured as a deep tree, rather than the chain-like structure of RNNs. The typical computational graph for a recursive network is illustrated in figure 10.14.
² We suggest to not abbreviate "recursive neural network" as "RNN" to avoid confusion with "recurrent neural network."
Recursive neural networks were introduced by Pollack (1990) and their potential use for learning to reason was described by Bottou (2011). Recursive networks have been successfully applied to processing data structures as input to neural nets (Frasconi et al., 1997, 1998), in natural language processing (Socher et al., 2011a, 2011c, 2013a) as well as in computer vision (Socher et al., 2011b).
One clear advantage of recursive nets over recurrent nets is that for a sequence
of the same length τ, the depth (measured as the number of compositions of
nonlinear operations) can be drastically reduced from τ to O(log τ ), which might
help deal with long-term dependencies. An open question is how to best structure
the tree. One option is to have a tree structure which does not depend on the data,
such as a balanced binary tree. In some application domains, external methods
can suggest the appropriate tree structure. For example, when processing natural
language sentences, the tree structure for the recursive network can be fixed to
the structure of the parse tree of the sentence provided by a natural language
parser (Socher et al., 2011a, 2013a). Ideally, one would like the learner itself to
discover and infer the tree structure that is appropriate for any given input, as
suggested by Bottou (2011).
Many variants of the recursive net idea are possible. For example, Frasconi et al. (1997) and Frasconi et al. (1998) associate the data with a tree structure,
and associate the inputs and targets with individual nodes of the tree. The
computation performed by each node does not have to be the traditional artificial
neuron computation (affine transformation of all inputs followed by a monotone
nonlinearity). For example, Socher et al. (2013a) propose using tensor operations
and bilinear forms, which have previously been found useful to model relationships
between concepts (Weston et al., 2010; Bordes et al., 2012) when the concepts are
represented by continuous vectors (embeddings).
10.7 The Challenge of Long-Term Dependencies
The mathematical challenge of learning long-term dependencies in recurrent net-
works was introduced in section 8.2.5. The basic problem is that gradients prop-
agated over many stages tend to either vanish (most of the time) or explode
(rarely, but with much damage to the optimization). Even if we assume that the
parameters are such that the recurrent network is stable (can store memories,
with gradients not exploding), the difficulty with long-term dependencies arises
from the exponentially smaller weights given to long-term interactions (involving
the multiplication of many Jacobians) compared to short-term ones. Many other
sources provide a deeper treatment (Hochreiter, 1991; Doya, 1993; Bengio et al., 1994; Pascanu et al., 2013). In this section, we describe the problem in more detail. The remaining sections describe approaches to overcoming the problem.
Figure 10.15: When composing many nonlinear functions (like the linear-tanh layer shown here), the result is highly nonlinear, typically with most of the values associated with a tiny derivative, some values with a large derivative, and many alternations between increasing and decreasing. In this plot, we plot a linear projection of a 100-dimensional hidden state down to a single dimension, plotted on the y-axis. The x-axis is the coordinate of the initial state along a random direction in the 100-dimensional space. We can thus view this plot as a linear cross-section of a high-dimensional function. The plots show the function after each time step, or equivalently, after each number of times the transition function has been composed.
Recurrent networks involve the composition of the same function multiple
times, once per time step. These compositions can result in extremely nonlinear
behavior, as illustrated in figure 10.15.
In particular, the function composition employed by recurrent neural networks
somewhat resembles matrix multiplication. We can think of the recurrence relation
h^(t) = W^⊤ h^(t−1)    (10.36)
as a very simple recurrent neural network lacking a nonlinear activation function,
and lacking inputs x. As described in section 8.2.5, this recurrence relation essentially describes the power method. It may be simplified to

h^(t) = (W^t)^⊤ h^(0),    (10.37)
and if W admits an eigendecomposition of the form

W = QΛQ^⊤,    (10.38)

with orthogonal Q, the recurrence may be simplified further to

h^(t) = QΛ^t Q^⊤ h^(0).    (10.39)
The eigenvalues are raised to the power of t causing eigenvalues with magnitude
less than one to decay to zero and eigenvalues with magnitude greater than one to
explode. Any component of h(0) that is not aligned with the largest eigenvector
will eventually be discarded.
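A small numerical check of this behavior (an assumed 2x2 example built so that W = QΛQ^⊤): after repeated multiplication, the component of h^(0) along the eigenvector with |λ| < 1 vanishes while the component along |λ| > 1 grows.

import numpy as np

# Build W = Q Lambda Q^T with orthogonal Q (eq. 10.38).
Q, _ = np.linalg.qr(np.random.default_rng(7).normal(size=(2, 2)))
Lam = np.diag([0.5, 1.1])                 # one shrinking, one growing eigenvalue
W = Q @ Lam @ Q.T

h0 = Q @ np.array([1.0, 1.0])             # equal weight on both eigenvectors
for t in (1, 10, 50):
    ht = np.linalg.matrix_power(W, t) @ h0
    coords = Q.T @ ht                     # coordinates in the eigenbasis
    print(t, coords)                      # first coordinate decays, second explodes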
This problem is particular to recurrent networks. In the scalar case, imagine
multiplying a weight w by itself many times. The product w^t will either vanish or
explode depending on the magnitude of w. However, if we make a non-recurrent network that has a different weight w^(t) at each time step, the situation is different. If the initial state is given by 1, then the state at time t is given by ∏_t w^(t). Suppose that the w^(t) values are generated randomly, independently from one another, with zero mean and variance v. The variance of the product is O(v^n). To obtain some desired variance v* we may choose the individual weights with variance v = ⁿ√(v*).
Very deep feedforward networks with carefully chosen scaling can thus avoid the
vanishing and exploding gradient problem, as argued by Sussillo (2014).
The vanishing and exploding gradient problem for RNNs was independently
discovered by separate researchers (Hochreiter, 1991; Bengio et al., 1993, 1994).
One may hope that the problem can be avoided simply by staying in a region of
parameter space where the gradients do not vanish or explode. Unfortunately, in
order to store memories in a way that is robust to small perturbations, the RNN
must enter a region of parameter space where gradients vanish (Bengio et al., 1993, 1994). Specifically, whenever the model is able to represent long term dependencies,
the gradient of a long term interaction has exponentially smaller magnitude than
the gradient of a short term interaction. It does not mean that it is impossible
to learn, but that it might take a very long time to learn long-term dependencies,
because the signal about these dependencies will tend to be hidden by the smallest
fluctuations arising from short-term dependencies. In practice, the experiments
in Bengio et al. (1994) show that as we increase the span of the dependencies that
need to be captured, gradient-based optimization becomes increasingly difficult,
with the probability of successful training of a traditional RNN via SGD rapidly
reaching 0 for sequences of only length 10 or 20.
For a deeper treatment of recurrent networks as dynamical systems, see Doya (1993), Bengio et al. (1994) and Siegelmann and Sontag (1995), with a review in Pascanu et al. (2013). The remaining sections of this chapter discuss various
approaches that have been proposed to reduce the difficulty of learning long-
term dependencies (in some cases allowing an RNN to learn dependencies across
hundreds of steps), but the problem of learning long-term dependencies remains
one of the main challenges in deep learning.
10.8 Echo State Networks
The recurrent weights mapping from h^(t−1) to h^(t) and the input weights mapping from x^(t) to h^(t) are some of the most difficult parameters to learn in a recurrent
network. One proposed approach (Jaeger, 2003; Maass et al., 2002; Jaeger and Haas, 2004; Jaeger, 2007b) to avoiding this difficulty is to set the recurrent weights
such that the recurrent hidden units do a good job of capturing the history of past
inputs, and learn only the output weights. This is the idea that was independently
proposed for echo state networks or ESNs (Jaeger and Haas, 2004; Jaeger, 2007b) and liquid state machines (Maass et al., 2002). The latter is similar, except
that it uses spiking neurons (with binary outputs) instead of the continuous-valued
hidden units used for ESNs. Both ESNs and liquid state machines are termed
reservoir computing (Lukoševičius and Jaeger, 2009) to denote the fact that the hidden units form a reservoir of temporal features which may capture different
aspects of the history of inputs.
One way to think about these reservoir computing recurrent networks is that
they are similar to kernel machines: they map an arbitrary length sequence (the
history of inputs up to time t) into a fixed-length vector (the recurrent state h^(t)), on which a linear predictor (typically a linear regression) can be applied to solve
on which a linear predictor (typically a linear regression) can be applied to solve
the problem of interest. The training criterion may then be easily designed to be
convex as a function of the output weights. For example, if the output consists
of linear regression from the hidden units to the output targets, and the training
criterion is mean squared error, then it is convex and may be solved reliably with
simple learning algorithms (Jaeger, 2003).
The important question is therefore: how do we set the input and recurrent
weights so that a rich set of histories can be represented in the recurrent neural
network state? The answer proposed in the reservoir computing literature is to
view the recurrent net as a dynamical system, and set the input and recurrent
weights such that the dynamical system is near the edge of stability.
The original idea was to make the eigenvalues of the Jacobian of the state-to-
state transition function be close to 1. As explained in section 8.2.5, an important
characteristic of a recurrent network is the eigenvalue spectrum of the Jacobians
J^(t) = ∂s^(t)/∂s^(t−1). Of particular importance is the spectral radius of J^(t), defined to
be the maximum of the absolute values of its eigenvalues.
To understand the effect of the spectral radius, consider the simple case of
back-propagation with a Jacobian matrix J that does not change with t. This
case happens, for example, when the network is purely linear. Suppose that J has
an eigenvector v with corresponding eigenvalue λ. Consider what happens as we
propagate a gradient vector backwards through time. If we begin with a gradient
vector g, then after one step of back-propagation, we will have Jg, and after n
steps we will have J^n g. Now consider what happens if we instead back-propagate a perturbed version of g. If we begin with g + δv, then after one step, we will have J(g + δv). After n steps, we will have J^n(g + δv). From this we can see that back-propagation starting from g and back-propagation starting from g + δv diverge by δJ^n v after n steps of back-propagation. If v is chosen to be a unit
eigenvector of J with eigenvalue λ, then multiplication by the Jacobian simply
scales the difference at each step. The two executions of back-propagation are
separated by a distance of δ|λ|^n. When v corresponds to the largest value of |λ|, this perturbation achieves the widest possible separation of an initial perturbation of size δ.
When |λ| > 1, the deviation size δ|λ|^n grows exponentially large. When |λ| < 1, the deviation size becomes exponentially small.
the deviation size becomes exponentially small.
Of course, this example assumed that the Jacobian was the same at every
time step, corresponding to a recurrent network with no nonlinearity. When a
nonlinearity is present, the derivative of the nonlinearity will approach zero on
many time steps, and help to prevent the explosion resulting from a large spectral
radius. Indeed, the most recent work on echo state networks advocates using a
spectral radius much larger than unity (Yildiz et al., 2012; Jaeger, 2012).
Everything we have said about back-propagation via repeated matrix multipli-
cation applies equally to forward propagation in a network with no nonlinearity,
where the state h^(t+1) = h^(t)⊤ W.
When a linear map W always shrinks h as measured by the L2 norm, then
we say that the map is contractive. When the spectral radius is less than one,
the mapping from h^(t) to h^(t+1) is contractive, so a small change becomes smaller
after each time step. This necessarily makes the network forget information about
the past when we use a finite level of precision (such as 32 bit integers) to store
the state vector.
The Jacobian matrix tells us how a small change of h^(t) propagates one step forward, or equivalently, how the gradient on h^(t+1) propagates one step backward,
during back-propagation. Note that neither W nor J need to be symmetric (al-
though they are square and real), so they can have complex-valued eigenvalues and
eigenvectors, with imaginary components corresponding to potentially oscillatory
behavior (if the same Jacobian was applied iteratively). Even though h^(t) or a small variation of h^(t) of interest in back-propagation are real-valued, they can
be expressed in such a complex-valued basis. What matters is what happens to
the magnitude (complex absolute value) of these possibly complex-valued basis
coefficients, when we multiply the matrix by the vector. An eigenvalue with magnitude greater than one corresponds to magnification (exponential growth, if applied iteratively), while an eigenvalue with magnitude less than one corresponds to shrinking (exponential decay, if applied iteratively).
With a nonlinear map, the Jacobian is free to change at each step. The
dynamics therefore become more complicated. However, it remains true that a
small initial variation can turn into a large variation after several steps. One
difference between the purely linear case and the nonlinear case is that the use of
a squashing nonlinearity such as tanh can cause the recurrent dynamics to become
bounded. Note that it is possible for back-propagation to retain unbounded
dynamics even when forward propagation has bounded dynamics, for example,
when a sequence of tanh units are all in the middle of their linear regime and are
connected by weight matrices with spectral radius greater than 1. However, it is rare for all of the tanh units to simultaneously lie at their linear activation point.
The strategy of echo state networks is simply to fix the weights to have some
spectral radius such as 3, where information is carried forward through time but does not explode due to the stabilizing effect of saturating nonlinearities like tanh.
More recently, it has been shown that the techniques used to set the weights
in ESNs could be used to initialize the weights in a fully trainable recurrent network (with the hidden-to-hidden recurrent weights trained using back-propagation through time), helping to learn long-term dependencies (Sutskever, 2012; Sutskever et al., 2013). In this setting, an initial spectral radius of 1.2 performs well, combined with the sparse initialization scheme described in section 8.4.
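A minimal sketch of this initialization idea (not a complete ESN implementation): draw a random recurrent matrix, rescale it to a chosen spectral radius, and then either train only the output weights (the ESN setting) or use the rescaled matrix as the starting point for full training with BPTT.

import numpy as np

rng = np.random.default_rng(8)
n_hidden = 100
target_radius = 1.2                       # e.g., the value reported above to work
                                          # well for initializing trainable RNNs

W = rng.normal(size=(n_hidden, n_hidden))
radius = np.max(np.abs(np.linalg.eigvals(W)))   # current spectral radius
W_rec = W * (target_radius / radius)            # rescale to the target radius

# Sanity check: the rescaled matrix has (approximately) the desired radius.
print(np.max(np.abs(np.linalg.eigvals(W_rec))))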
10.9 Leaky Units and Other Strategies for Multiple Time Scales
One way to deal with long-term dependencies is to design a model that operates
at multiple time scales, so that some parts of the model operate at fine-grained
time scales and can handle small details, while other parts operate at coarse time
scales and transfer information from the distant past to the present more efficiently.
Various strategies for building both fine and coarse time scales are possible. These
include the addition of skip connections across time, “leaky units” that integrate
signals with different time constants, and the removal of some of the connections
used to model fine-grained time scales.
10.9.1 Adding Skip Connections through Time
One way to obtain coarse time scales is to add direct connections from variables in
the distant past to variables in the present. The idea of using such skip connections
dates back to Lin et al. (1996) and follows from the idea of incorporating delays in feedforward neural networks (Lang and Hinton, 1988). In an ordinary recurrent
network, a recurrent connection goes from a unit at time t to a unit at time t+ 1.
It is possible to construct recurrent networks with longer delays (Bengio, 1991).
As we have seen in section 8.2.5, gradients may vanish or explode exponentially with respect to the number of time steps. Lin et al. (1996) introduced recurrent connections with a time-delay of d to mitigate this problem. Gradients now diminish exponentially as a function of τ/d rather than τ. Since there are both
delayed and single step connections, gradients may still explode exponentially in τ.
This allows the learning algorithm to capture longer dependencies although not all
long-term dependencies may be represented well in this way.
10.9.2 Leaky Units and a Spectrum of Different Time Scales
Another way to obtain paths on which the product of derivatives is close to one is to
have units with linear self-connections and a weight near one on these connections.
When we accumulate a running average µ^(t) of some value v^(t) by applying the update µ^(t) ← αµ^(t−1) + (1 − α)v^(t), the α parameter is an example of a linear self-connection from µ^(t−1) to µ^(t). When α is near one, the running average remembers
information about the past for a long time, and when α is near zero, information
about the past is rapidly discarded. Hidden units with linear self-connections can
behave similarly to such running averages. Such hidden units are called leaky
units.
Skip connections through d time steps are a way of ensuring that a unit can
always learn to be influenced by a value from d time steps earlier. The use of a
linear self-connection with a weight near one is a different way of ensuring that the
unit can access values from the past. The linear self-connection approach allows
this effect to be adapted more smoothly and flexibly by adjusting the real-valued
α rather than by adjusting the integer-valued skip length.
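A small sketch of this update (a scalar running average applied to a noisy signal, with two illustrative values of α):

import numpy as np

v = np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.default_rng(9).normal(size=200)

def leaky(values, alpha):
    mu, out = 0.0, []
    for v_t in values:
        mu = alpha * mu + (1.0 - alpha) * v_t   # linear self-connection with weight alpha
        out.append(mu)
    return np.array(out)

slow = leaky(v, alpha=0.99)   # remembers the past for a long time
fast = leaky(v, alpha=0.10)   # rapidly discards the past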
These ideas were proposed by Mozer (1992) and by El Hihi and Bengio (1996). Leaky units were also found to be useful in the context of echo state networks (Jaeger et al., 2007).
There are two basic strategies for setting the time constants used by leaky
units. One strategy is to manually fix them to values that remain constant, for
example by sampling their values from some distribution once at initialization time.
Another strategy is to make the time constants free parameters and learn them.
Having such leaky units at different time scales appears to help with long-term
dependencies (Mozer, 1992; Pascanu et al., 2013).
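To make the leaky-unit update concrete, the following minimal NumPy sketch (not from the book; the variable names and random input are our own) maintains a vector of leaky hidden units, each with its own fixed time constant α sampled once at initialization, one of the two strategies described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_steps = 4, 100

# One coefficient per unit; values near 1 remember the past for a long time,
# values near 0 forget it quickly.
alpha = rng.uniform(0.5, 0.999, size=n_units)

h = np.zeros(n_units)
for t in range(n_steps):
    v = rng.standard_normal(n_units)    # stand-in for the new signal at time t
    h = alpha * h + (1.0 - alpha) * v   # leaky (linear self-connection) update
```

Making `alpha` a learned parameter rather than a fixed constant corresponds to the second strategy mentioned above.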
10.9.3 Removing Connections
Another approach to handle long-term dependencies is the idea of organizing
the state of the RNN at multiple time-scales (El Hihi and Bengio, 1996), with
information flowing more easily through long distances at the slower time scales.
This idea differs from the skip connections through time discussed earlier
because it involves actively removing length-one connections and replacing them
with longer connections. Units modified in such a way are forced to operate on a
long time scale. Skip connections through time add edges. Units receiving such
new connections may learn to operate on a long time scale but may also choose to
focus on their other short-term connections.
There are different ways in which a group of recurrent units can be forced to
operate at different time scales. One option is to make the recurrent units leaky,
but to have different groups of units associated with different fixed time scales.
This was the proposal in Mozer (1992) and has been successfully used in Pascanu et al. (2013). Another option is to have explicit and discrete updates taking place at different times, with a different frequency for different groups of units. This is the approach of El Hihi and Bengio (1996) and Koutnik et al. (2014). It worked well on a number of benchmark datasets.
10.10 The Long Short-Term Memory and Other Gated
RNNs
As of this writing, the most effective sequence models used in practical applications
are called gated RNNs. These include the long short-term memory and networks based on the gated recurrent unit.
Like leaky units, gated RNNs are based on the idea of creating paths through
time that have derivatives that neither vanish nor explode. Leaky units did
this with connection weights that were either manually chosen constants or were
parameters. Gated RNNs generalize this to connection weights that may change
at each time step.
Figure 10.16: Block diagram of the LSTM recurrent network “cell.” Cells are connected
recurrently to each other, replacing the usual hidden units of ordinary recurrent networks.
An input feature is computed with a regular artificial neuron unit. Its value can be
accumulated into the state if the sigmoidal input gate allows it. The state unit has a
linear self-loop whose weight is controlled by the forget gate. The output of the cell can
be shut off by the output gate. All the gating units have a sigmoid nonlinearity, while the
input unit can have any squashing nonlinearity. The state unit can also be used as an
extra input to the gating units. The black square indicates a delay of a single time step.
Leaky units allow the network to accumulate information (such as evidence
for a particular feature or category) over a long duration. However, once that
information has been used, it might be useful for the neural network to forget the
old state. For example, if a sequence is made of sub-sequences and we want a leaky
unit to accumulate evidence inside each sub-sequence, we need a mechanism to
forget the old state by setting it to zero. Instead of manually deciding when to
clear the state, we want the neural network to learn to decide when to do it. This
is what gated RNNs do.
10.10.1 LSTM
The clever idea of introducing self-loops to produce paths where the gradient can flow for long durations is a core contribution of the initial long short-term memory (LSTM) model (Hochreiter and Schmidhuber, 1997). A crucial addition has been to make the weight on this self-loop conditioned on the context, rather than fixed (Gers et al., 2000). By making the weight of this self-loop gated (controlled by another hidden unit), the time scale of integration can be changed dynamically. In this case, we mean that even for an LSTM with fixed parameters, the time scale of integration can change based on the input sequence, because the time constants are output by the model itself. The LSTM has been found extremely successful in many applications, such as unconstrained handwriting recognition (Graves et al., 2009), speech recognition (Graves et al., 2013; Graves and Jaitly, 2014), handwriting generation (Graves, 2013), machine translation (Sutskever et al., 2014), image captioning (Kiros et al., 2014b; Vinyals et al., 2014b; Xu et al., 2015) and parsing (Vinyals et al., 2014a).
The LSTM block diagram is illustrated in figure 10.16. The corresponding forward propagation equations are given below, in the case of a shallow recurrent network architecture. Deeper architectures have also been successfully used (Graves et al., 2013; Pascanu et al., 2014a). Instead of a unit that simply applies an element-wise nonlinearity to the affine transformation of inputs and recurrent units, LSTM recurrent networks have “LSTM cells” that have an internal recurrence (a self-loop), in addition to the outer recurrence of the RNN. Each cell has the same inputs and outputs as an ordinary recurrent network, but has more parameters and a system of gating units that controls the flow of information. The most important component is the state unit $s_i^{(t)}$ that has a linear self-loop similar to the leaky units described in the previous section. However, here, the self-loop weight (or the associated time constant) is controlled by a forget gate unit $f_i^{(t)}$ (for time step t and cell i), that sets this weight to a value between 0 and 1 via a sigmoid unit:
$$f_i^{(t)} = \sigma\left(b_i^f + \sum_j U_{i,j}^f x_j^{(t)} + \sum_j W_{i,j}^f h_j^{(t-1)}\right), \qquad (10.40)$$
where $x^{(t)}$ is the current input vector and $h^{(t)}$ is the current hidden layer vector, containing the outputs of all the LSTM cells, and $b^f$, $U^f$, $W^f$ are respectively biases, input weights and recurrent weights for the forget gates. The LSTM cell
internal state is thus updated as follows, but with a conditional self-loop weight $f_i^{(t)}$:
$$s_i^{(t)} = f_i^{(t)} s_i^{(t-1)} + g_i^{(t)} \sigma\left(b_i + \sum_j U_{i,j} x_j^{(t)} + \sum_j W_{i,j} h_j^{(t-1)}\right), \qquad (10.41)$$
where $b$, $U$ and $W$ respectively denote the biases, input weights and recurrent weights into the LSTM cell. The external input gate unit $g_i^{(t)}$ is computed similarly to the forget gate (with a sigmoid unit to obtain a gating value between 0 and 1), but with its own parameters:
$$g_i^{(t)} = \sigma\left(b_i^g + \sum_j U_{i,j}^g x_j^{(t)} + \sum_j W_{i,j}^g h_j^{(t-1)}\right). \qquad (10.42)$$
The output $h_i^{(t)}$ of the LSTM cell can also be shut off, via the output gate $q_i^{(t)}$, which also uses a sigmoid unit for gating:
$$h_i^{(t)} = \tanh\left(s_i^{(t)}\right) q_i^{(t)} \qquad (10.43)$$
$$q_i^{(t)} = \sigma\left(b_i^o + \sum_j U_{i,j}^o x_j^{(t)} + \sum_j W_{i,j}^o h_j^{(t-1)}\right) \qquad (10.44)$$
which has parameters $b^o$, $U^o$, $W^o$ for its biases, input weights and recurrent weights, respectively. Among the variants, one can choose to use the cell state $s_i^{(t)}$ as an extra input (with its weight) into the three gates of the i-th unit, as shown in figure 10.16. This would require three additional parameters.
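A single forward step through equations 10.40-10.44 can be written compactly in vectorized form. The following NumPy sketch is only an illustration, not the book's reference implementation: the parameter dictionary, shapes and names are our own assumptions, and the optional connections from the state to the gates are omitted.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, s_prev, p):
    """One LSTM forward step following eqs. 10.40-10.44, vectorized over cells.
    p holds the biases b*, input weights U* and recurrent weights W*."""
    f = sigmoid(p["bf"] + p["Uf"] @ x + p["Wf"] @ h_prev)  # forget gate, eq. 10.40
    g = sigmoid(p["bg"] + p["Ug"] @ x + p["Wg"] @ h_prev)  # external input gate, eq. 10.42
    q = sigmoid(p["bo"] + p["Uo"] @ x + p["Wo"] @ h_prev)  # output gate, eq. 10.44
    # Eq. 10.41: a sigmoid input unit is used here to match the equation, but
    # any squashing nonlinearity could be substituted (see figure 10.16).
    s = f * s_prev + g * sigmoid(p["b"] + p["U"] @ x + p["W"] @ h_prev)
    h = np.tanh(s) * q                                     # cell output, eq. 10.43
    return h, s
```

Running the step over a sequence simply threads `h` and `s` from one call to the next, which is the outer recurrence of the RNN.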
LSTM networks have been shown to learn long-term dependencies more easily
than the simple recurrent architectures, first on artificial data sets designed for
testing the ability to learn long-term dependencies (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997; Hochreiter et al., 2001), then on challenging sequence processing tasks where state-of-the-art performance was obtained (Graves, 2012; Graves et al., 2013; Sutskever et al., 2014). Variants and alternatives to the LSTM
have been studied and used and are discussed next.
10.10.2 Other Gated RNNs
Which pieces of the LSTM architecture are actually necessary? What other
successful architectures could be designed that allow the network to dynamically
control the time scale and forgetting behavior of different units?
Some answers to these questions are given with the recent work on gated RNNs,
whose units are also known as gated recurrent units or GRUs (Cho et al., 2014b; Chung et al., 2014, 2015a; Jozefowicz et al., 2015; Chrupala et al., 2015). The main
difference with the LSTM is that a single gating unit simultaneously controls the
forgetting factor and the decision to update the state unit. The update equations
are the following:
$$h_i^{(t)} = u_i^{(t-1)} h_i^{(t-1)} + (1 - u_i^{(t-1)}) \sigma\left(b_i + \sum_j U_{i,j} x_j^{(t-1)} + \sum_j W_{i,j} r_j^{(t-1)} h_j^{(t-1)}\right), \qquad (10.45)$$
where u stands for “update” gate and r for “reset” gate. Their value is defined as
usual:
$$u_i^{(t)} = \sigma\left(b_i^u + \sum_j U_{i,j}^u x_j^{(t)} + \sum_j W_{i,j}^u h_j^{(t)}\right) \qquad (10.46)$$
and
$$r_i^{(t)} = \sigma\left(b_i^r + \sum_j U_{i,j}^r x_j^{(t)} + \sum_j W_{i,j}^r h_j^{(t)}\right). \qquad (10.47)$$
The reset and update gates can individually “ignore” parts of the state vector.
The update gates act like conditional leaky integrators that can linearly gate any
dimension, thus choosing to copy it (at one extreme of the sigmoid) or completely
ignore it (at the other extreme) by replacing it by the new “target state” value
(towards which the leaky integrator wants to converge). The reset gates control
which parts of the state get used to compute the next target state, introducing an
additional nonlinear effect in the relationship between past state and future state.
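As with the LSTM above, one step of equations 10.45-10.47 can be sketched in a few lines of NumPy. This is only an illustration under our own naming assumptions; note that, following equation 10.45, the gates computed at the previous step modulate the current state update, while the new gates are computed afterwards from the current input and state.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x_prev, x_cur, h_prev, u_prev, r_prev, p):
    """One gated recurrent unit step following eqs. 10.45-10.47."""
    # Eq. 10.45: previous update and reset gates control the state transition.
    h = u_prev * h_prev + (1.0 - u_prev) * sigmoid(
        p["b"] + p["U"] @ x_prev + p["W"] @ (r_prev * h_prev))
    u = sigmoid(p["bu"] + p["Uu"] @ x_cur + p["Wu"] @ h)   # update gate, eq. 10.46
    r = sigmoid(p["br"] + p["Ur"] @ x_cur + p["Wr"] @ h)   # reset gate, eq. 10.47
    return h, u, r
```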
Many more variants around this theme can be designed. For example the
reset gate (or forget gate) output could be shared across multiple hidden units.
Alternately, the product of a global gate (covering a whole group of units, such as
an entire layer) and a local gate (per unit) could be used to combine global control
and local control. However, several investigations over architectural variations
of the LSTM and GRU found no variant that would clearly beat both of these
across a wide range of tasks (Greff et al., 2015; Jozefowicz et al., 2015). Greff et al. (2015) found that a crucial ingredient is the forget gate, while Jozefowicz et al. (2015) found that adding a bias of 1 to the LSTM forget gate, a practice advocated by Gers et al. (2000), makes the LSTM as strong as the best of the explored architectural variants.
10.11 Optimization for Long-Term Dependencies
Section 8.2.5 and section 10.7 have described the vanishing and exploding gradient
problems that occur when optimizing RNNs over many time steps.
An interesting idea proposed by Martens and Sutskever (2011) is that second derivatives may vanish at the same time that first derivatives vanish. Second-order optimization algorithms may roughly be understood as dividing the first derivative by the second derivative (in higher dimension, multiplying the gradient by the inverse Hessian). If the second derivative shrinks at a similar rate to the first derivative, then the ratio of first and second derivatives may remain relatively constant. Unfortunately, second-order methods have many drawbacks, including high computational cost, the need for a large minibatch, and a tendency to be attracted to saddle points. Martens and Sutskever (2011) found promising results using second-order methods. Later, Sutskever et al. (2013) found that simpler methods such as Nesterov momentum with careful initialization could achieve similar results. See Sutskever (2012) for more detail. Both of these approaches
have largely been replaced by simply using SGD (even without momentum) applied
to LSTMs. This is part of a continuing theme in machine learning that it is often
much easier to design a model that is easy to optimize than it is to design a more
powerful optimization algorithm.
10.11.1 Clipping Gradients
As discussed in section 8.2.4, strongly nonlinear functions such as those computed by a recurrent net over many time steps tend to have derivatives that can be either very large or very small in magnitude. This is illustrated in figure 8.3 and figure 10.17, in which we see that the objective function (as a function of the parameters) has a “landscape” in which one finds “cliffs”: wide and rather flat regions separated by tiny regions where the objective function changes quickly, forming a kind of cliff.
The difficulty that arises is that when the parameter gradient is very large, a
gradient descent parameter update could throw the parameters very far, into a
region where the objective function is larger, undoing much of the work that had
been done to reach the current solution. The gradient tells us the direction that
corresponds to the steepest descent within an infinitesimal region surrounding the
current parameters. Outside of this infinitesimal region, the cost function may
begin to curve back upwards. The update must be chosen to be small enough to
avoid traversing too much upward curvature. We typically use learning rates that
decay slowly enough that consecutive steps have approximately the same learning
rate. A step size that is appropriate for a relatively linear part of the landscape is
often inappropriate and causes uphill motion if we enter a more curved part of the
landscape on the next step.
Figure 10.17: Example of the effect of gradient clipping in a recurrent network with
two parameters w and b. Gradient clipping can make gradient descent perform more
reasonably in the vicinity of extremely steep cliffs. These steep cliffs commonly occur
in recurrent networks near where a recurrent network behaves approximately linearly.
The cliff is exponentially steep in the number of time steps because the weight matrix
is multiplied by itself once for each time step. (Left) Gradient descent without gradient clipping overshoots the bottom of this small ravine, then receives a very large gradient from the cliff face. The large gradient catastrophically propels the parameters outside the axes of the plot. (Right) Gradient descent with gradient clipping has a more moderate reaction to the cliff. While it does ascend the cliff face, the step size is restricted so that it cannot be propelled away from the steep region near the solution. Figure adapted with permission from Pascanu et al. (2013).
A simple type of solution has been in use by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013). One option is to clip the parameter gradient from a minibatch element-wise (Mikolov, 2012) just before the parameter update. Another is to clip the norm $\|g\|$ of the gradient $g$ (Pascanu et al., 2013) just before the parameter update:
$$\text{if } \|g\| > v \qquad (10.48)$$
$$g \leftarrow \frac{g v}{\|g\|} \qquad (10.49)$$
where v is the norm threshold and g is used to update parameters. Because the
gradient of all the parameters (including different groups of parameters, such as
weights and biases) is renormalized jointly with a single scaling factor, the latter
method has the advantage that it guarantees that each step is still in the gradient
direction, but experiments suggest that both forms work similarly. Although
the parameter update has the same direction as the true gradient, with gradient
norm clipping, the parameter update vector norm is now bounded. This bounded
gradient avoids performing a detrimental step when the gradient explodes. In
fact, even simply taking a random step when the gradient magnitude is above
a threshold tends to work almost as well. If the explosion is so severe that the
gradient is numerically Inf or Nan (considered infinite or not-a-number), then
a random step of size v can be taken and will typically move away from the
numerically unstable configuration. Clipping the gradient norm per-minibatch will
not change the direction of the gradient for an individual minibatch. However,
taking the average of the norm-clipped gradient from many minibatches is not
equivalent to clipping the norm of the true gradient (the gradient formed from
using all examples). Examples that have large gradient norm, as well as examples
that appear in the same minibatch as such examples, will have their contribution
to the final direction diminished. This stands in contrast to traditional minibatch
gradient descent, where the true gradient direction is equal to the average over all
minibatch gradients. Put another way, traditional stochastic gradient descent uses
an unbiased estimate of the gradient, while gradient descent with norm clipping
introduces a heuristic bias that we know empirically to be useful. With element-
wise clipping, the direction of the update is not aligned with the true gradient
or the minibatch gradient, but it is still a descent direction. It has also been
proposed (Graves, 2013) to clip the back-propagated gradient (with respect to
hidden units) but no comparison has been published between these variants; we
conjecture that all these methods behave similarly.
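A minimal sketch of the norm-clipping variant of equations 10.48-10.49 follows; it is illustrative only, with the threshold name and the joint treatment of parameter groups chosen by us to match the description above.

```python
import numpy as np

def clip_gradient_norm(grads, v):
    """Jointly rescale a list of gradient arrays so their global norm does not
    exceed the threshold v (eqs. 10.48-10.49). The update direction is kept;
    only its length is bounded."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > v:
        grads = [g * (v / total_norm) for g in grads]
    return grads
```

Element-wise clipping (Mikolov, 2012) would instead apply `np.clip(g, -v, v)` to each gradient array independently, changing the direction but still yielding a descent direction.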
10.11.2 Regularizing to Encourage Information Flow
Gradient clipping helps to deal with exploding gradients, but it does not help with
vanishing gradients. To address vanishing gradients and better capture long-term
dependencies, we discussed the idea of creating paths in the computational graph of
the unfolded recurrent architecture along which the product of gradients associated
with arcs is near 1. One approach to achieve this is with LSTMs and other self-
loops and gating mechanisms, described above in section 10.10. Another idea is to regularize or constrain the parameters so as to encourage “information flow.” In particular, we would like the gradient vector $\nabla_{h^{(t)}} L$ being back-propagated to
maintain its magnitude, even if the loss function only penalizes the output at the
end of the sequence. Formally, we want
$$(\nabla_{h^{(t)}} L) \frac{\partial h^{(t)}}{\partial h^{(t-1)}} \qquad (10.50)$$
to be as large as
$$\nabla_{h^{(t)}} L. \qquad (10.51)$$
With this objective, Pascanu et al. (2013) propose the following regularizer:
$$\Omega = \sum_t \left( \frac{\left\| (\nabla_{h^{(t)}} L) \frac{\partial h^{(t)}}{\partial h^{(t-1)}} \right\|}{\left\| \nabla_{h^{(t)}} L \right\|} - 1 \right)^2. \qquad (10.52)$$
Computing the gradient of this regularizer may appear difficult, but Pascanu et al. (2013) propose an approximation in which we consider the back-propagated vectors $\nabla_{h^{(t)}} L$ as if they were constants (for the purpose of this regularizer, so
that there is no need to back-propagate through them). The experiments with
this regularizer suggest that, if combined with the norm clipping heuristic (which
handles gradient explosion), the regularizer can considerably increase the span of
the dependencies that an RNN can learn. Because it keeps the RNN dynamics
on the edge of explosive gradients, the gradient clipping is particularly important.
Without gradient clipping, gradient explosion prevents learning from succeeding.
A key weakness of this approach is that it is not as effective as the LSTM for
tasks where data is abundant, such as language modeling.
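For concreteness, the quantity in equation 10.52 can be accumulated as below, under the approximation just described. This is only a sketch with our own names; it assumes the per-step gradients and Jacobians have already been obtained from back-propagation through time.

```python
import numpy as np

def information_flow_penalty(grad_h, jac):
    """Approximate the regularizer of eq. 10.52. grad_h[t] is the gradient of
    the loss with respect to h^(t), treated as a constant; jac[t] is the
    Jacobian of h^(t) with respect to h^(t-1)."""
    omega = 0.0
    for g, J in zip(grad_h, jac):
        ratio = np.linalg.norm(g @ J) / np.linalg.norm(g)
        omega += (ratio - 1.0) ** 2
    return omega
```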
10.12 Explicit Memory
Intelligence requires knowledge and acquiring knowledge can be done via learning,
which has motivated the development of large-scale deep architectures. However,
there are different kinds of knowledge. Some knowledge can be implicit, sub-
conscious, and difficult to verbalize—such as how to walk, or how a dog looks
different from a cat. Other knowledge can be explicit, declarative, and relatively
straightforward to put into words—everyday commonsense knowledge, like “a cat
is a kind of animal,” or very specific facts that you need to know to accomplish
your current goals, like “the meeting with the sales team is at 3:00 PM in room
141.”
Neural networks excel at storing implicit knowledge. However, they struggle to
memorize facts. Stochastic gradient descent requires many presentations of the
Figure 10.18: A schematic of an example of a network with an explicit memory, capturing
some of the key design elements of the neural Turing machine. In this diagram we
distinguish the “representation” part of the model (the “task network,” here a recurrent
net in the bottom) from the “memory” part of the model (the set of cells), which can
store facts. The task network learns to “control” the memory, deciding where to read from
and where to write to within the memory (through the reading and writing mechanisms,
indicated by bold arrows pointing at the reading and writing addresses).
same input before it can be stored in a neural network's parameters, and even then, that input will not be stored especially precisely. Graves et al. (2014b) hypothesized
that this is because neural networks lack the equivalent of the working memory
system that allows human beings to explicitly hold and manipulate pieces of
information that are relevant to achieving some goal. Such explicit memory
components would allow our systems not only to rapidly and “intentionally” store
and retrieve specific facts but also to sequentially reason with them. The need
for neural networks that can process information in a sequence of steps, changing
the way the input is fed into the network at each step, has long been recognized
as important for the ability to reason rather than to make automatic, intuitive
responses to the input (Hinton, 1990).
To resolve this difficulty, Weston et al. (2014) introduced memory networks that include a set of memory cells that can be accessed via an addressing mechanism. Memory networks originally required a supervision signal instructing them how to use their memory cells. Graves et al. (2014b) introduced the neural Turing machine, which is able to learn to read from and write arbitrary content to memory cells without explicit supervision about which actions to undertake, and allowed end-to-end training without this supervision signal, via the use of a content-based soft attention mechanism (see Bahdanau et al. (2015) and section 12.4.5.1). This soft addressing mechanism has become standard with other related architectures emulating algorithmic mechanisms in a way that still allows gradient-based optimization (Sukhbaatar et al., 2015; Joulin and Mikolov, 2015; Kumar et al., 2015; Vinyals et al., 2015a; Grefenstette et al., 2015).
Each memory cell can be thought of as an extension of the memory cells in
LSTMs and GRUs. The difference is that the network outputs an internal state
that chooses which cell to read from or write to, just as memory accesses in a
digital computer read from or write to a specific address.
It is difficult to optimize functions that produce exact, integer addresses. To
alleviate this problem, NTMs actually read from or write to many memory cells
simultaneously. To read, they take a weighted average of many cells. To write, they
modify multiple cells by different amounts. The coefficients for these operations
are chosen to be focused on a small number of cells, for example, by producing
them via a softmax function. Using these weights with non-zero derivatives allows
the functions controlling access to the memory to be optimized using gradient
descent. The gradient on these coefficients indicates whether each of them should
be increased or decreased, but the gradient will typically be large only for those
memory addresses receiving a large coefficient.
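The following sketch illustrates the idea of soft, differentiable memory access. It is a simplification rather than the actual neural Turing machine addressing mechanism (which combines content-based and location-based addressing with separate erase and add operations); the function and variable names are ours.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def content_scores(memory, key, beta=5.0):
    """Content-based scores: similarity of each cell to a key vector,
    sharpened by beta before the softmax focuses the weights."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    return beta * sims

def soft_read(memory, scores):
    """Read a weighted average of all cells; the weights are focused on a few
    cells but remain differentiable with respect to the scores."""
    w = softmax(scores)          # shape (n_cells,), sums to 1
    return w @ memory            # shape (cell_width,)

def soft_write(memory, scores, new_value):
    """Move every cell part of the way toward new_value, in proportion to its
    addressing weight, so strongly addressed cells change the most."""
    w = softmax(scores)
    return memory + w[:, None] * (new_value[None, :] - memory)
```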
These memory cells are typically augmented to contain a vector, rather than
the single scalar stored by an LSTM or GRU memory cell. There are two reasons
to increase the size of the memory cell. One reason is that we have increased the
cost of accessing a memory cell. We pay the computational cost of producing a
coefficient for many cells, but we expect these coefficients to cluster around a small
number of cells. By reading a vector value, rather than a scalar value, we can
offset some of this cost. Another reason to use vector-valued memory cells is that
they allow for content-based addressing, where the weight used to read from or write to a cell is a function of that cell. Vector-valued cells allow us to retrieve a
complete vector-valued memory if we are able to produce a pattern that matches
some but not all of its elements. This is analogous to the way that people can
recall the lyrics of a song based on a few words. We can think of a content-based
read instruction as saying, “Retrieve the lyrics of the song that has the chorus ‘We
all live in a yellow submarine.’ ” Content-based addressing is more useful when we
make the objects to be retrieved large—if every letter of the song was stored in a
separate memory cell, we would not be able to find them this way. By comparison,
location-based addressing is not allowed to refer to the content of the memory.
We can think of a location-based read instruction as saying “Retrieve the lyrics of
the song in slot 347.” Location-based addressing can often be a perfectly sensible
mechanism even when the memory cells are small.
If the content of a memory cell is copied (not forgotten) at most time steps, then
the information it contains can be propagated forward in time and the gradients
propagated backward in time without either vanishing or exploding.
The explicit memory approach is illustrated in figure 10.18, where we see that
a “task neural network” is coupled with a memory. Although that task neural
network could be feedforward or recurrent, the overall system is a recurrent network.
The task network can choose to read from or write to specific memory addresses.
Explicit memory seems to allow models to learn tasks that ordinary RNNs or LSTM
RNNs cannot learn. One reason for this advantage may be that information and
gradients can be propagated (forward in time or backwards in time, respectively)
for very long durations.
As an alternative to back-propagation through weighted averages of memory
cells, we can interpret the memory addressing coefficients as probabilities and
stochastically read just one cell (Zaremba and Sutskever, 2015). Optimizing models that make discrete decisions requires specialized optimization algorithms, described in section 20.9.1. So far, training these stochastic architectures that make discrete
decisions remains harder than training deterministic algorithms that make soft
decisions.
Whether it is soft (allowing back-propagation) or stochastic and hard, the
mechanism for choosing an address is in its form identical to the attention
mechanism which had been previously introduced in the context of machine
translation (Bahdanau et al., 2015) and discussed in section 12.4.5.1. The idea of attention mechanisms for neural networks was introduced even earlier, in the context of handwriting generation (Graves, 2013), with an attention mechanism
that was constrained to move only forward in time through the sequence. In
the case of machine translation and memory networks, at each step, the focus of
attention can move to a completely different place, compared to the previous step.
Recurrent neural networks provide a way to extend deep learning to sequential
data. They are the last major tool in our deep learning toolbox. Our discussion now
moves to how to choose and use these tools and how to apply them to real-world
tasks.
Chapter 11
Practical Methodology
Successfully applying deep learning techniques requires more than just a good
knowledge of what algorithms exist and the principles that explain how they
work. A good machine learning practitioner also needs to know how to choose an
algorithm for a particular application and how to monitor and respond to feedback
obtained from experiments in order to improve a machine learning system. During
day to day development of machine learning systems, practitioners need to decide
whether to gather more data, increase or decrease model capacity, add or remove
regularizing features, improve the optimization of a model, improve approximate
inference in a model, or debug the software implementation of the model. All of
these operations are at the very least time-consuming to try out, so it is important
to be able to determine the right course of action rather than blindly guessing.
Most of this book is about different machine learning models, training algo-
rithms, and objective functions. This may give the impression that the most
important ingredient to being a machine learning expert is knowing a wide variety
of machine learning techniques and being good at different kinds of math. In prac-
tice, one can usually do much better with a correct application of a commonplace
algorithm than by sloppily applying an obscure algorithm. Correct application of
an algorithm depends on mastering some fairly simple methodology. Many of the
recommendations in this chapter are adapted from Ng (2015).
We recommend the following practical design process:
• Determine your goals—what error metric to use, and your target value for
this error metric. These goals and error metrics should be driven by the
problem that the application is intended to solve.
• Establish a working end-to-end pipeline as soon as possible, including the
estimation of the appropriate performance metrics.
• Instrument the system well to determine bottlenecks in performance. Diag-
nose which components are performing worse than expected and whether it
is due to overfitting, underfitting, or a defect in the data or software.
• Repeatedly make incremental changes such as gathering new data, adjusting
hyperparameters, or changing algorithms, based on specific findings from
your instrumentation.
As a running example, we will use the Street View address number transcription system (Goodfellow et al., 2014d). The purpose of this application is to add
buildings to Google Maps. Street View cars photograph the buildings and record
the GPS coordinates associated with each photograph. A convolutional network
recognizes the address number in each photograph, allowing the Google Maps
database to add that address in the correct location. The story of how this
commercial application was developed gives an example of how to follow the design
methodology we advocate.
We now describe each of the steps in this process.
11.1 Performance Metrics
Determining your goals, in terms of which error metric to use, is a necessary first
step because your error metric will guide all of your future actions. You should
also have an idea of what level of performance you desire.
Keep in mind that for most applications, it is impossible to achieve absolute
zero error. The Bayes error defines the minimum error rate that you can hope to
achieve, even if you have infinite training data and can recover the true probability
distribution. This is because your input features may not contain complete
information about the output variable, or because the system might be intrinsically
stochastic. You will also be limited by having a finite amount of training data.
The amount of training data can be limited for a variety of reasons. When your
goal is to build the best possible real-world product or service, you can typically
collect more data but must determine the value of reducing error further and weigh
this against the cost of collecting more data. Data collection can require time,
money, or human suffering (for example, if your data collection process involves
performing invasive medical tests). When your goal is to answer a scientific question
about which algorithm performs better on a fixed benchmark, the benchmark
specification usually determines the training set and you are not allowed to collect
more data.
How can one determine a reasonable level of performance to expect? Typically,
in the academic setting, we have some estimate of the error rate that is attainable
based on previously published benchmark results. In the real-world setting, we
have some idea of the error rate that is necessary for an application to be safe,
cost-effective, or appealing to consumers. Once you have determined your realistic
desired error rate, your design decisions will be guided by reaching this error rate.
Another important consideration besides the target value of the performance
metric is the choice of which metric to use. Several different performance metrics
may be used to measure the effectiveness of a complete application that includes
machine learning components. These performance metrics are usually different
from the cost function used to train the model. As described in section 5.1.2, it is common to measure the accuracy, or equivalently, the error rate, of a system.
However, many applications require more advanced metrics.
Sometimes it is much more costly to make one kind of a mistake than another.
For example, an e-mail spam detection system can make two kinds of mistakes:
incorrectly classifying a legitimate message as spam, and incorrectly allowing a
spam message to appear in the inbox. It is much worse to block a legitimate
message than to allow a questionable message to pass through. Rather than
measuring the error rate of a spam classifier, we may wish to measure some form
of total cost, where the cost of blocking legitimate messages is higher than the cost
of allowing spam messages.
Sometimes we wish to train a binary classifier that is intended to detect some
rare event. For example, we might design a medical test for a rare disease. Suppose
that only one in every million people has this disease. We can easily achieve
99.9999% accuracy on the detection task, by simply hard-coding the classifier
to always report that the disease is absent. Clearly, accuracy is a poor way to
characterize the performance of such a system. One way to solve this problem is
to instead measure precision and recall. Precision is the fraction of detections
reported by the model that were correct, while recall is the fraction of true events
that were detected. A detector that says no one has the disease would achieve
perfect precision, but zero recall. A detector that says everyone has the disease
would achieve perfect recall, but precision equal to the percentage of people who
have the disease (0.0001% in our example of a disease that only one person in a million has). When using precision and recall, it is common to plot a PR curve,
with precision on the y-axis and recall on the x-axis. The classifier generates a score
that is higher if the event to be detected occurred. For example, a feedforward
network designed to detect a disease outputs ŷ = P (y = 1 | x), estimating the
probability that a person whose medical results are described by features x has
the disease. We choose to report a detection whenever this score exceeds some
threshold. By varying the threshold, we can trade precision for recall. In many
cases, we wish to summarize the performance of the classifier with a single number
rather than a curve. To do so, we can convert precision p and recall r into an
F-score given by
$$F = \frac{2pr}{p + r}. \qquad (11.1)$$
Another option is to report the total area lying beneath the PR curve.
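The computation behind these metrics is short. The sketch below is illustrative only (the names and threshold handling are ours): sweeping the threshold over the range of scores traces out the PR curve, and equation 11.1 collapses one (precision, recall) pair into a single number.

```python
import numpy as np

def precision_recall_f(scores, labels, threshold):
    """Precision, recall and F-score of a detector that reports a detection
    whenever its score exceeds the threshold. labels is a 0/1 array."""
    detected = scores > threshold
    true_positives = np.sum(detected & (labels == 1))
    precision = true_positives / max(detected.sum(), 1)      # correct detections
    recall = true_positives / max((labels == 1).sum(), 1)    # true events found
    if precision + recall == 0:
        return precision, recall, 0.0
    f_score = 2 * precision * recall / (precision + recall)  # eq. 11.1
    return precision, recall, f_score
```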
In some applications, it is possible for the machine learning system to refuse to
make a decision. This is useful when the machine learning algorithm can estimate
how confident it should be about a decision, especially if a wrong decision can
be harmful and if a human operator is able to occasionally take over. The Street
View transcription system provides an example of this situation. The task is to
transcribe the address number from a photograph in order to associate the location
where the photo was taken with the correct address in a map. Because the value
of the map degrades considerably if the map is inaccurate, it is important to add
an address only if the transcription is correct. If the machine learning system
thinks that it is less likely than a human being to obtain the correct transcription,
then the best course of action is to allow a human to transcribe the photo instead.
Of course, the machine learning system is only useful if it is able to dramatically
reduce the number of photos that the human operators must process. A natural
performance metric to use in this situation is coverage. Coverage is the fraction
of examples for which the machine learning system is able to produce a response.
It is possible to trade coverage for accuracy. One can always obtain 100% accuracy
by refusing to process any example, but this reduces the coverage to 0%. For the
Street View task, the goal for the project was to reach human-level transcription
accuracy while maintaining 95% coverage. Human-level performance on this task
is 98% accuracy.
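A hypothetical sketch of the coverage/accuracy trade-off follows (the confidence score and names are our own assumptions, not the Street View system's actual interface): the system only answers when its confidence exceeds a threshold, and raising that threshold increases accuracy at the price of coverage.

```python
import numpy as np

def coverage_and_accuracy(confidence, predictions, labels, threshold):
    """Coverage is the fraction of examples the system answers; accuracy is
    measured only on the answered examples."""
    answered = confidence >= threshold
    coverage = answered.mean()
    if not answered.any():
        return 0.0, 1.0   # nothing answered: 0% coverage, vacuous accuracy
    accuracy = (predictions[answered] == labels[answered]).mean()
    return coverage, accuracy
```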
Many other metrics are possible. We can for example, measure click-through
rates, collect user satisfaction surveys, and so on. Many specialized application
areas have application-specific criteria as well.
What is important is to determine which performance metric to improve ahead
of time, then concentrate on improving this metric. Without clearly defined goals,
it can be difficult to tell whether changes to a machine learning system make
progress or not.
11.2 Default Baseline Models
After choosing performance metrics and goals, the next step in any practical
application is to establish a reasonable end-to-end system as soon as possible. In
this section, we provide recommendations for which algorithms to use as the first
baseline approach in various situations. Keep in mind that deep learning research
progresses quickly, so better default algorithms are likely to become available soon
after this writing.
Depending on the complexity of your problem, you may even want to begin
without using deep learning. If your problem has a chance of being solved by
just choosing a few linear weights correctly, you may want to begin with a simple
statistical model like logistic regression.
If you know that your problem falls into an “AI-complete” category like object
recognition, speech recognition, machine translation, and so on, then you are likely
to do well by beginning with an appropriate deep learning model.
First, choose the general category of model based on the structure of your
data. If you want to perform supervised learning with fixed-size vectors as input,
use a feedforward network with fully connected layers. If the input has known
topological structure (for example, if the input is an image), use a convolutional
network. In these cases, you should begin by using some kind of piecewise linear
unit (ReLUs or their generalizations like Leaky ReLUs, PReLUs and maxout). If
your input or output is a sequence, use a gated recurrent net (LSTM or GRU).
A reasonable choice of optimization algorithm is SGD with momentum with a
decaying learning rate (popular decay schemes that perform better or worse on
different problems include decaying linearly until reaching a fixed minimum learning
rate, decaying exponentially, or decreasing the learning rate by a factor of 2-10
each time validation error plateaus). Another very reasonable alternative is Adam.
Batch normalization can have a dramatic effect on optimization performance,
especially for convolutional networks and networks with sigmoidal nonlinearities.
While it is reasonable to omit batch normalization from the very first baseline, it
should be introduced quickly if optimization appears to be problematic.
Unless your training set contains tens of millions of examples or more, you
should include some mild forms of regularization from the start. Early stopping
should be used almost universally. Dropout is an excellent regularizer that is easy
to implement and compatible with many models and training algorithms. Batch
normalization also sometimes reduces generalization error and allows dropout to
be omitted, due to the noise in the estimate of the statistics used to normalize
each variable.
If your task is similar to another task that has been studied extensively, you
will probably do well by first copying the model and algorithm that is already
known to perform best on the previously studied task. You may even want to copy
a trained model from that task. For example, it is common to use the features
from a convolutional network trained on ImageNet to solve other computer vision
tasks (Girshick et al., 2015).
A common question is whether to begin by using unsupervised learning, described further in part III. This is somewhat domain specific. Some domains, such as natural language processing, are known to benefit tremendously from unsupervised learning techniques such as learning unsupervised word embeddings. In other domains, such as computer vision, current unsupervised learning techniques do not bring a benefit, except in the semi-supervised setting, when the number of labeled examples is very small (Kingma et al., 2014; Rasmus et al., 2015). If your
application is in a context where unsupervised learning is known to be important,
then include it in your first end-to-end baseline. Otherwise, only use unsupervised
learning in your first attempt if the task you want to solve is unsupervised. You
can always try adding unsupervised learning later if you observe that your initial
baseline overfits.
11.3 Determining Whether to Gather More Data
After the first end-to-end system is established, it is time to measure the perfor-
mance of the algorithm and determine how to improve it. Many machine learning
novices are tempted to make improvements by trying out many different algorithms.
However, it is often much better to gather more data than to improve the learning
algorithm.
How does one decide whether to gather more data? First, determine whether
the performance on the training set is acceptable. If performance on the training
set is poor, the learning algorithm is not using the training data that is already
available, so there is no reason to gather more data. Instead, try increasing the
size of the model by adding more layers or adding more hidden units to each layer.
Also, try improving the learning algorithm, for example by tuning the learning
rate hyperparameter. If large models and carefully tuned optimization algorithms
do not work well, then the problem might be the quality of the training data. The data may be too noisy or may not include the right inputs needed to predict the
desired outputs. This suggests starting over, collecting cleaner data or collecting a
richer set of features.
If the performance on the training set is acceptable, then measure the per-
formance on a test set. If the performance on the test set is also acceptable,
then there is nothing left to be done. If test set performance is much worse than
training set performance, then gathering more data is one of the most effective
solutions. The key considerations are the cost and feasibility of gathering more
data, the cost and feasibility of reducing the test error by other means, and the
amount of data that is expected to be necessary to improve test set performance
significantly. At large internet companies with millions or billions of users, it is
feasible to gather large datasets, and the expense of doing so can be considerably
less than the other alternatives, so the answer is almost always to gather more
training data. For example, the development of large labeled datasets was one of
the most important factors in solving object recognition. In other contexts, such as
medical applications, it may be costly or infeasible to gather more data. A simple
alternative to gathering more data is to reduce the size of the model or improve
regularization, by adjusting hyperparameters such as weight decay coefficients,
or by adding regularization strategies such as dropout. If you find that the gap
between train and test performance is still unacceptable even after tuning the
regularization hyperparameters, then gathering more data is advisable.
When deciding whether to gather more data, it is also necessary to decide
how much to gather. It is helpful to plot curves showing the relationship between
training set size and generalization error, like in figure 5.4. By extrapolating such
curves, one can predict how much additional training data would be needed to
achieve a certain level of performance. Usually, adding a small fraction of the total
number of examples will not have a noticeable impact on generalization error. It is
therefore recommended to experiment with training set sizes on a logarithmic scale,
for example doubling the number of examples between consecutive experiments.
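A hypothetical sketch of such an experiment is shown below; the training routine is assumed to be supplied by the practitioner, and only the size schedule and bookkeeping are illustrated.

```python
import numpy as np

def size_schedule(n_start, n_total):
    """Training-set sizes on a logarithmic scale: double the number of
    examples between consecutive experiments."""
    sizes = []
    n = n_start
    while n <= n_total:
        sizes.append(n)
        n *= 2
    return sizes

def learning_curve(train_and_evaluate, n_total, n_start=1000, seed=0):
    """Train on nested random subsets of increasing size and record the
    generalization error returned by the user-supplied pipeline."""
    order = np.random.default_rng(seed).permutation(n_total)
    return {n: train_and_evaluate(order[:n]) for n in size_schedule(n_start, n_total)}
```

Plotting the recorded errors against the logarithm of the training set size and extrapolating the curve gives a rough estimate of how much additional data a target error would require.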
If gathering much more data is not feasible, the only other way to improve
generalization error is to improve the learning algorithm itself. This becomes the
domain of research and not the domain of advice for applied practitioners.
11.4 Selecting Hyperparameters
Most deep learning algorithms come with many hyperparameters that control many
aspects of the algorithm’s behavior. Some of these hyperparameters affect the time
and memory cost of running the algorithm. Some of these hyperparameters affect
the quality of the model recovered by the training process and its ability to infer
correct results when deployed on new inputs.
There are two basic approaches to choosing these hyperparameters: choosing
them manually and choosing them automatically. Choosing the hyperparameters
manually requires understanding what the hyperparameters do and how machine
learning models achieve good generalization. Automatic hyperparameter selection
algorithms greatly reduce the need to understand these ideas, but they are often
much more computationally costly.
11.4.1 Manual Hyperparameter Tuning
To set hyperparameters manually, one must understand the relationship between
hyperparameters, training error, generalization error and computational resources
(memory and runtime). This means establishing a solid foundation on the fun-
damental ideas concerning the effective capacity of a learning algorithm from
chapter 5.
The goal of manual hyperparameter search is usually to find the lowest general-
ization error subject to some runtime and memory budget. We do not discuss how
to determine the runtime and memory impact of various hyperparameters here
because this is highly platform-dependent.
The primary goal of manual hyperparameter search is to adjust the effective
capacity of the model to match the complexity of the task. Effective capacity
is constrained by three factors: the representational capacity of the model, the
ability of the learning algorithm to successfully minimize the cost function used to
train the model, and the degree to which the cost function and training procedure
regularize the model. A model with more layers and more hidden units per layer has
higher representational capacity—it is capable of representing more complicated
functions. It can not necessarily actually learn all of these functions though, if
the training algorithm cannot discover that certain functions do a good job of
minimizing the training cost, or if regularization terms such as weight decay forbid
some of these functions.
The generalization error typically follows a U-shaped curve when plotted as
a function of one of the hyperparameters, as in figure 5.3. At one extreme, the
hyperparameter value corresponds to low capacity, and generalization error is high
because training error is high. This is the underfitting regime. At the other extreme,
the hyperparameter value corresponds to high capacity, and the generalization
error is high because the gap between training and test error is high. Somewhere
in the middle lies the optimal model capacity, which achieves the lowest possible
generalization error, by adding a medium generalization gap to a medium amount
of training error.
For some hyperparameters, overfitting occurs when the value of the hyper-
parameter is large. The number of hidden units in a layer is one such example,
because increasing the number of hidden units increases the capacity of the model.
For some hyperparameters, overfitting occurs when the value of the hyperparame-
ter is small. For example, the smallest allowable weight decay coefficient of zero
corresponds to the greatest effective capacity of the learning algorithm.
Not every hyperparameter will be able to explore the entire U-shaped curve.
Many hyperparameters are discrete, such as the number of units in a layer or the
number of linear pieces in a maxout unit, so it is only possible to visit a few points
along the curve. Some hyperparameters are binary. Usually these hyperparameters
are switches that specify whether or not to use some optional component of
the learning algorithm, such as a preprocessing step that normalizes the input
features by subtracting their mean and dividing by their standard deviation. These
hyperparameters can only explore two points on the curve. Other hyperparameters
have some minimum or maximum value that prevents them from exploring some
part of the curve. For example, the minimum weight decay coefficient is zero. This
means that if the model is underfitting when weight decay is zero, we can not enter
the overfitting region by modifying the weight decay coefficient. In other words,
some hyperparameters can only subtract capacity.
The learning rate is perhaps the most important hyperparameter. If you
have time to tune only one hyperparameter, tune the learning rate. It con-
trols the effective capacity of the model in a more complicated way than other
hyperparameters—the effective capacity of the model is highest when the learning
rate is correct for the optimization problem, not when the learning rate is especially
large or especially small. The learning rate has a U-shaped curve for training error,
illustrated in figure 11.1. When the learning rate is too large, gradient descent can inadvertently increase rather than decrease the training error. In the idealized quadratic case, this occurs if the learning rate is at least twice as large as its optimal value (LeCun et al., 1998a). When the learning rate is too small, training
is not only slower, but may become permanently stuck with a high training error.
This effect is poorly understood (it would not happen for a convex loss function).
Tuning the parameters other than the learning rate requires monitoring both
training and test error to diagnose whether your model is overfitting or underfitting,
then adjusting its capacity appropriately.
If your error on the training set is higher than your target error rate, you have
no choice but to increase capacity. If you are not using regularization and you are
confident that your optimization algorithm is performing correctly, then you must
add more layers to your network or add more hidden units. Unfortunately, this
increases the computational costs associated with the model.
If your error on the test set is higher than your target error rate, you can
Figure 11.1: Typical relationship between the learning rate and the training error. Notice the sharp rise in error when the learning rate is above an optimal value. This is for a fixed training time, as a smaller learning rate may sometimes only slow down training by a factor proportional to the learning rate reduction. Generalization error can follow this curve or be complicated by regularization effects arising out of having a learning rate that is too large or too small, since poor optimization can, to some degree, reduce or prevent overfitting, and even points with equivalent training error can have different generalization error.
now take two kinds of actions. The test error is the sum of the training error and
the gap between training and test error. The optimal test error is found by trading
off these quantities. Neural networks typically perform best when the training
error is very low (and thus, when capacity is high) and the test error is primarily
driven by the gap between train and test error. Your goal is to reduce this gap
without increasing training error faster than the gap decreases. To reduce the gap,
change regularization hyperparameters to reduce effective model capacity, such as
by adding dropout or weight decay. Usually the best performance comes from a
large model that is regularized well, for example by using dropout.
Most hyperparameters can be set by reasoning about whether they increase or
decrease model capacity. Some examples are included in Table 11.1.
While manually tuning hyperparameters, do not lose sight of your end goal:
good performance on the test set. Adding regularization is only one way to achieve
this goal. As long as you have low training error, you can always reduce general-
ization error by collecting more training data. The brute force way to practically
guarantee success is to continually increase model capacity and training set size
until the task is solved. This approach does of course increase the computational
cost of training and inference, so it is only feasible given appropriate resources. In
Number of hidden units: increases capacity when increased. Reason: increasing the number of hidden units increases the representational capacity of the model. Caveat: increasing the number of hidden units increases both the time and memory cost of essentially every operation on the model.

Learning rate: increases capacity when tuned optimally. Reason: an improper learning rate, whether too high or too low, results in a model with low effective capacity due to optimization failure.

Convolution kernel width: increases capacity when increased. Reason: increasing the kernel width increases the number of parameters in the model. Caveat: a wider kernel results in a narrower output dimension, reducing model capacity unless you use implicit zero padding to reduce this effect. Wider kernels require more memory for parameter storage and increase runtime, but a narrower output reduces memory cost.

Implicit zero padding: increases capacity when increased. Reason: adding implicit zeros before convolution keeps the representation size large. Caveat: increased time and memory cost of most operations.

Weight decay coefficient: increases capacity when decreased. Reason: decreasing the weight decay coefficient frees the model parameters to become larger.

Dropout rate: increases capacity when decreased. Reason: dropping units less often gives the units more opportunities to “conspire” with each other to fit the training set.

Table 11.1: The effect of various hyperparameters on model capacity.
principle, this approach could fail due to optimization difficulties, but for many
problems optimization does not seem to be a significant barrier, provided that the
model is chosen appropriately.
11.4.2 Automatic Hyperparameter Optimization Algorithms
The ideal learning algorithm just takes a dataset and outputs a function, without
requiring hand-tuning of hyperparameters. The popularity of several learning
algorithms such as logistic regression and SVMs stems in part from their ability to
perform well with only one or two tuned hyperparameters. Neural networks can
sometimes perform well with only a small number of tuned hyperparameters, but
often benefit significantly from tuning of forty or more hyperparameters. Manual
hyperparameter tuning can work very well when the user has a good starting point,
such as one determined by others having worked on the same type of application
and architecture, or when the user has months or years of experience in exploring
hyperparameter values for neural networks applied to similar tasks. However,
for many applications, these starting points are not available. In these cases,
automated algorithms can find useful values of the hyperparameters.
If we think about the way in which the user of a learning algorithm searches for
good values of the hyperparameters, we realize that an optimization is taking place:
we are trying to find a value of the hyperparameters that optimizes an objective
function, such as validation error, sometimes under constraints (such as a budget
for training time, memory or recognition time). It is therefore possible, in principle,
to develop hyperparameter optimization algorithms that wrap a learning
algorithm and choose its hyperparameters, thus hiding the hyperparameters of the
learning algorithm from the user. Unfortunately, hyperparameter optimization
algorithms often have their own hyperparameters, such as the range of values that
should be explored for each of the learning algorithm’s hyperparameters. However,
these secondary hyperparameters are usually easier to choose, in the sense that
acceptable performance may be achieved on a wide range of tasks using the same
secondary hyperparameters for all tasks.
11.4.3 Grid Search
When there are three or fewer hyperparameters, the common practice is to perform
grid search. For each hyperparameter, the user selects a small finite set of
values to explore. The grid search algorithm then trains a model for every joint
specification of hyperparameter values in the Cartesian product of the set of values
for each individual hyperparameter. The experiment that yields the best validation
Figure 11.2: Comparison of grid search and random search. For illustration purposes we display two hyperparameters, but we are typically interested in having many more. (Left) To perform grid search, we provide a set of values for each hyperparameter. The search algorithm runs training for every joint hyperparameter setting in the cross product of these sets. (Right) To perform random search, we provide a probability distribution over joint hyperparameter configurations. Usually most of these hyperparameters are independent from each other. Common choices for the distribution over a single hyperparameter include uniform and log-uniform (to sample from a log-uniform distribution, take the exp of a sample from a uniform distribution). The search algorithm then randomly samples joint hyperparameter configurations and runs training with each of them. Both grid search and random search evaluate the validation set error and return the best configuration. The figure illustrates the typical case where only some hyperparameters have a significant influence on the result. In this illustration, only the hyperparameter on the horizontal axis has a significant effect. Grid search wastes an amount of computation that is exponential in the number of non-influential hyperparameters, while random search tests a unique value of every influential hyperparameter on nearly every trial. Figure reproduced with permission from Bergstra and Bengio (2012).
set error is then chosen as having found the best hyperparameters. See the left of figure 11.2 for an illustration of a grid of hyperparameter values.
How should the lists of values to search over be chosen? In the case of numerical
(ordered) hyperparameters, the smallest and largest element of each list is chosen
conservatively, based on prior experience with similar experiments, to make sure
that the optimal value is very likely to be in the selected range. Typically, a grid
search involves picking values approximately on a logarithmic scale, e.g., a learning
rate taken within the set {0.1, 0.01, 10^{-3}, 10^{-4}, 10^{-5}}, or a number of hidden units taken within the set {50, 100, 200, 500, 1000, 2000}.
Grid search usually performs best when it is performed repeatedly. For example,
suppose that we ran a grid search over a hyperparameter α using values of {−1, 0,1}.
If the best value found is 1, then we underestimated the range in which the best α lies and we should shift the grid and run another search with α in, for example, {1, 2, 3}. If we find that the best value of α is 0, then we may wish to refine our estimate by zooming in and running a grid search over {−0.1, 0, 0.1}.
The obvious problem with grid search is that its computational cost grows
exponentially with the number of hyperparameters. If there are m hyperparameters,
each taking at most n values, then the number of training and evaluation trials
required grows as O(n^m). The trials may be run in parallel and exploit loose parallelism (with almost no need for communication between different machines carrying out the search). Unfortunately, due to the exponential cost of grid search,
even parallelization may not provide a satisfactory size of search.
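As a concrete illustration, the following Python sketch runs a grid search over the Cartesian product of per-hyperparameter value lists. The search space and the evaluation function are hypothetical stand-ins for a real training pipeline and its validation error measurement.

```python
import itertools
import math

# Hypothetical search space, with values spaced roughly on a logarithmic scale.
search_space = {
    "learning_rate": [0.1, 0.01, 1e-3, 1e-4, 1e-5],
    "n_hidden": [50, 100, 200, 500, 1000, 2000],
}

def evaluate(config):
    # Stand-in for "train a model with these hyperparameters and return its
    # validation error"; here just a made-up smooth function of the config.
    return (math.log10(config["learning_rate"]) + 2) ** 2 \
        + abs(math.log(config["n_hidden"] / 500))

def grid_search(search_space, evaluate):
    best_config, best_error = None, float("inf")
    names = list(search_space)
    # Every joint setting in the Cartesian product: O(n^m) trials.
    for values in itertools.product(*(search_space[n] for n in names)):
        config = dict(zip(names, values))
        error = evaluate(config)
        if error < best_error:
            best_config, best_error = config, error
    return best_config, best_error

print(grid_search(search_space, evaluate))
```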
11.4.4 Random Search
Fortunately, there is an alternative to grid search that is as simple to program, more
convenient to use, and converges much faster to good values of the hyperparameters:
random search (Bergstra and Bengio, 2012).
A random search proceeds as follows. First we define a marginal distribution
for each hyperparameter, e.g., a Bernoulli or multinoulli for binary or discrete
hyperparameters, or a uniform distribution on a log-scale for positive real-valued
hyperparameters. For example,
    log_learning_rate ∼ u(−1, −5),   (11.2)

    learning_rate = 10^{log_learning_rate},   (11.3)

where u(a, b) indicates a sample of the uniform distribution in the interval (a, b). Similarly the log_number_of_hidden_units may be sampled from u(log(50), log(2000)).
Unlike in the case of a grid search, one should not discretize or bin the values
of the hyperparameters. This allows one to explore a larger set of values, and does
not incur additional computational cost. In fact, as illustrated in figure 11.2, a
random search can be exponentially more efficient than a grid search, when there
are several hyperparameters that do not strongly affect the performance measure.
This is studied at length in Bergstra and Bengio (2012), who found that random
search reduces the validation set error much faster than grid search, in terms of
the number of trials run by each method.
As with grid search, one may often want to run repeated versions of random
search, to refine the search based on the results of the first run.
The main reason why random search finds good solutions faster than grid search
is that there are no wasted experimental runs, unlike in the case of grid search,
when two values of a hyperparameter (given values of the other hyperparameters)
would give the same result. In the case of grid search, the other hyperparameters
would have the same values for these two runs, whereas with random search, they
would usually have different values. Hence if the change between these two values does not make much difference in terms of validation set error, grid
search will unnecessarily repeat two equivalent experiments while random search
will still give two independent explorations of the other hyperparameters.
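To make equations 11.2 and 11.3 concrete, here is a minimal sketch of random search that samples the learning rate and the number of hidden units from log-uniform distributions as described above. The evaluation function is again a made-up stand-in for a real training run.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def sample_config():
    log_learning_rate = rng.uniform(-5, -1)                     # eq. 11.2
    n_hidden = int(np.exp(rng.uniform(np.log(50), np.log(2000))))
    return {"learning_rate": 10.0 ** log_learning_rate,         # eq. 11.3
            "n_hidden": n_hidden}

def evaluate(config):
    # Stand-in for training a model and measuring validation error.
    return (math.log10(config["learning_rate"]) + 2) ** 2 \
        + abs(math.log(config["n_hidden"] / 500))

def random_search(n_trials=50):
    best_config, best_error = None, float("inf")
    for _ in range(n_trials):
        config = sample_config()
        error = evaluate(config)
        if error < best_error:
            best_config, best_error = config, error
    return best_config, best_error

print(random_search())
```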
11.4.5 Model-Based Hyperparameter Optimization
The search for good hyperparameters can be cast as an optimization problem.
The decision variables are the hyperparameters. The cost to be optimized is the
validation set error that results from training using these hyperparameters. In
simplified settings where it is feasible to compute the gradient of some differentiable
error measure on the validation set with respect to the hyperparameters, we can
simply follow this gradient (Bengio et al., 1999; Bengio, 2000; Maclaurin et al., 2015). Unfortunately, in most practical settings, this gradient is unavailable, either
due to its high computation and memory cost, or due to hyperparameters having
intrinsically non-differentiable interactions with the validation set error, as in the
case of discrete-valued hyperparameters.
To compensate for this lack of a gradient, we can build a model of the validation
set error, then propose new hyperparameter guesses by performing optimization
within this model. Most model-based algorithms for hyperparameter search use a
Bayesian regression model to estimate both the expected value of the validation set
error for each hyperparameter and the uncertainty around this expectation. Opti-
mization thus involves a tradeoff between exploration (proposing hyperparameters
for which there is high uncertainty, which may lead to a large improvement but may
also perform poorly) and exploitation (proposing hyperparameters which the model
is confident will perform as well as any hyperparameters it has seen so far—usually
hyperparameters that are very similar to ones it has seen before). Contemporary approaches to hyperparameter optimization include Spearmint (Snoek et al., 2012), TPE (Bergstra et al., 2011) and SMAC (Hutter et al., 2011).
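The following sketch illustrates the loop described above in its simplest form: fit a Bayesian regression model (here a Gaussian process from scikit-learn, one possible choice) to the hyperparameter/validation-error pairs observed so far, then propose the candidate minimizing a lower confidence bound that trades off predicted error against uncertainty. It is not an implementation of Spearmint, TPE or SMAC, and the objective is a toy stand-in.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def validation_error(log_lr):
    # Stand-in for "train with this log learning rate, return validation error".
    return (log_lr + 3.0) ** 2 + 0.05 * rng.normal()

# A few random initial trials.
X = rng.uniform(-5, -1, size=(4, 1))
y = np.array([validation_error(x[0]) for x in X])

candidates = np.linspace(-5, -1, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Lower confidence bound: exploit low predicted error, explore high uncertainty.
    x_next = candidates[np.argmin(mean - 2.0 * std)]
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print("best log learning rate found:", X[np.argmin(y), 0])
```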
Currently, we cannot unambiguously recommend Bayesian hyperparameter
optimization as an established tool for achieving better deep learning results or
for obtaining those results with less effort. Bayesian hyperparameter optimization
sometimes performs comparably to human experts, sometimes better, but fails
catastrophically on other problems. It may be worth trying to see if it works on
a particular problem but is not yet sufficiently mature or reliable. That being
said, hyperparameter optimization is an important field of research that, while
often driven primarily by the needs of deep learning, holds the potential to benefit
not only the entire field of machine learning but the discipline of engineering in
general.
One drawback common to most hyperparameter optimization algorithms with more sophistication than random search is that they require a training ex-
periment to run to completion before they are able to extract any information
from the experiment. This is much less efficient, in the sense of how much infor-
mation can be gleaned early in an experiment, than manual search by a human
practitioner, since one can usually tell early on if some set of hyperparameters is
completely pathological. Swersky et al. (2014) have introduced an early version
of an algorithm that maintains a set of multiple experiments. At various time
points, the hyperparameter optimization algorithm can choose to begin a new
experiment, to “freeze” a running experiment that is not promising, or to “thaw”
and resume an experiment that was earlier frozen but now appears promising given
more information.
11.5 Debugging Strategies
When a machine learning system performs poorly, it is usually difficult to tell
whether the poor performance is intrinsic to the algorithm itself or whether there
is a bug in the implementation of the algorithm. Machine learning systems are
difficult to debug for a variety of reasons.
In most cases, we do not know a priori what the intended behavior of the
algorithm is. In fact, the entire point of using machine learning is that it will
discover useful behavior that we were not able to specify ourselves. If we train a
neural network on a new classification task and it achieves 5% test error, we have
no straightforward way of knowing if this is the expected behavior or sub-optimal
behavior.
A further difficulty is that most machine learning models have multiple parts
that are each adaptive. If one part is broken, the other parts can adapt and still
achieve roughly acceptable performance. For example, suppose that we are training
a neural net with several layers parametrized by weights W and biases b. Suppose
further that we have manually implemented the gradient descent rule for each
parameter separately, and we made an error in the update for the biases:
    b ← b − α   (11.4)
where α is the learning rate. This erroneous update does not use the gradient at
all. It causes the biases to constantly become negative throughout learning, which
is clearly not a correct implementation of any reasonable learning algorithm. The
bug may not be apparent just from examining the output of the model though.
Depending on the distribution of the input, the weights may be able to adapt to
compensate for the negative biases.
Most debugging strategies for neural nets are designed to get around one or
both of these two difficulties. Either we design a case that is so simple that the
correct behavior actually can be predicted, or we design a test that exercises one
part of the neural net implementation in isolation.
Some important debugging tests include:
Visualize the model in action: When training a model to detect objects in
images, view some images with the detections proposed by the model displayed
superimposed on the image. When training a generative model of speech, listen to
some of the speech samples it produces. This may seem obvious, but it is easy to
fall into the practice of only looking at quantitative performance measurements
like accuracy or log-likelihood. Directly observing the machine learning model
performing its task will help to determine whether the quantitative performance
numbers it achieves seem reasonable. Evaluation bugs can be some of the most
devastating bugs because they can mislead you into believing your system is
performing well when it is not.
Visualize the worst mistakes: Most models are able to output some sort of
confidence measure for the task they perform. For example, classifiers based on a
softmax output layer assign a probability to each class. The probability assigned
to the most likely class thus gives an estimate of the confidence the model has in
its classification decision. Typically, maximum likelihood training results in these
values being overestimates rather than accurate probabilities of correct prediction,
but they are somewhat useful in the sense that examples that are actually less
likely to be correctly labeled receive smaller probabilities under the model. By
viewing the training set examples that are the hardest to model correctly, one can
often discover problems with the way the data has been preprocessed or labeled.
For example, the Street View transcription system originally had a problem where
the address number detection system would crop the image too tightly and omit
some of the digits. The transcription network then assigned very low probability
to the correct answer on these images. Sorting the images to identify the most
confident mistakes showed that there was a systematic problem with the cropping.
Modifying the detection system to crop much wider images resulted in much better
performance of the overall system, even though the transcription network needed
to be able to process greater variation in the position and scale of the address
numbers.
Reasoning about software using train and test error: It is often difficult to
determine whether the underlying software is correctly implemented. Some clues
can be obtained from the train and test error. If training error is low but test error
is high, then it is likely that the training procedure works correctly, and the
model is overfitting for fundamental algorithmic reasons. An alternative possibility
is that the test error is measured incorrectly due to a problem with saving the
model after training then reloading it for test set evaluation, or if the test data
was prepared differently from the training data. If both train and test error are
high, then it is difficult to determine whether there is a software defect or whether
the model is underfitting due to fundamental algorithmic reasons. This scenario
requires further tests, described next.
Fit a tiny dataset: If you have high error on the training set, determine whether
it is due to genuine underfitting or due to a software defect. Usually even small
models can be guaranteed to be able to fit a sufficiently small dataset. For example,
a classification dataset with only one example can be fit just by setting the biases
of the output layer correctly. Usually if you cannot train a classifier to correctly
label a single example, an autoencoder to successfully reproduce a single example
with high fidelity, or a generative model to consistently emit samples resembling a
single example, there is a software defect preventing successful optimization on the
training set. This test can be extended to a small dataset with few examples.
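Below is a minimal sketch of this test: a small softmax classifier trained by gradient descent on a handful of synthetic examples. If the training loss of a test like this does not approach zero, a software defect is a more likely explanation than genuine underfitting. The data, model size and learning rate are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))      # 5 examples, 10 features (synthetic)
y = np.array([0, 1, 2, 1, 0])     # 3 classes

W = np.zeros((10, 3))
b = np.zeros(3)
for step in range(2000):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0
    grad_logits /= len(y)
    W -= 0.5 * (X.T @ grad_logits)                       # plain gradient descent
    b -= 0.5 * grad_logits.sum(axis=0)

print("final training loss:", loss)   # should be very close to zero
```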
Compare back-propagated derivatives to numerical derivatives: If you are using
a software framework that requires you to implement your own gradient com-
putations, or if you are adding a new operation to a differentiation library and
must define its bprop method, then a common source of error is implementing this
gradient expression incorrectly. One way to verify that these derivatives are correct
is to compare the derivatives computed by your implementation of automatic
differentiation to the derivatives computed by finite differences. Because

    f'(x) = lim_{ε→0} [f(x + ε) − f(x)] / ε,   (11.5)

we can approximate the derivative by using a small, finite ε:

    f'(x) ≈ [f(x + ε) − f(x)] / ε.   (11.6)

We can improve the accuracy of the approximation by using the centered difference:

    f'(x) ≈ [f(x + ε/2) − f(x − ε/2)] / ε.   (11.7)
The perturbation size ε must be chosen to be large enough to ensure that the pertur-
bation is not rounded down too much by finite-precision numerical computations.
Usually, we will want to test the gradient or Jacobian of a vector-valued function
g : R^m → R^n. Unfortunately, finite differencing only allows us to take a single
derivative at a time. We can either run finite differencing mn times to evaluate all
of the partial derivatives of g, or we can apply the test to a new function that uses
random projections at both the input and output of g. For example, we can apply
our test of the implementation of the derivatives to f(x) where f(x) = u^T g(vx),
where u and v are randomly chosen vectors. Computing f(x) correctly requires
being able to back-propagate through g correctly, yet is efficient to do with finite
differences because f has only a single input and a single output. It is usually
a good idea to repeat this test for more than one value of u and v to reduce
the chance that the test overlooks mistakes that are orthogonal to the random
projection.
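Here is a minimal numpy sketch of that procedure, using the centered difference of equation 11.7 together with the random-projection trick; the function g and its analytic Jacobian below are toy stand-ins for the operation and the bprop implementation being tested.

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(4, 3))
def g(x):                    # toy vector-valued function g : R^3 -> R^4
    return np.tanh(A @ x)
def g_jacobian(x):           # analytic Jacobian (what a correct bprop computes)
    return (1.0 - np.tanh(A @ x) ** 2)[:, None] * A

u = rng.normal(size=4)       # random projection of the output
v = rng.normal(size=3)       # random projection of the input
f = lambda t: u @ g(t * v)   # f(t) = u^T g(t v): one input, one output

t0, eps = 0.7, 1e-5
numeric = (f(t0 + eps / 2) - f(t0 - eps / 2)) / eps    # centered difference (11.7)
analytic = u @ g_jacobian(t0 * v) @ v                  # what backprop should give
print(numeric, analytic)     # should agree to several decimal places
```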
If one has access to numerical computation on complex numbers, then there is
a very efficient way to numerically estimate the gradient by using complex numbers
as input to the function (Squire and Trapp, 1998). The method is based on the
observation that
f x i f x if
( + ) = ( ) + 
( ) + (
x O 2
) (11.8)
real( ( + )) = ( ) + (
f x i f x O 2
) imag(
,
f x i
( + )

) = f
( ) + (
x O 2
), (11.9)
where i =
√
−1. Unlike in the real-valued case above, there is no cancellation effect
due to taking the difference between the value of f at different points. This allows
the use of tiny values of  like  = 10−150
, which make the O(2
) error insignificant
for all practical purposes.
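In an environment with complex arithmetic this is only a few lines; here is a sketch with a toy scalar function whose derivative is known in closed form.

```python
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x)       # toy function; accepts complex input

def f_prime(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))

x, eps = 1.3, 1e-150
complex_step = np.imag(f(x + 1j * eps)) / eps              # eq. 11.9, no cancellation
centered = (f(x + 1e-5 / 2) - f(x - 1e-5 / 2)) / 1e-5      # eq. 11.7, for comparison
print(complex_step, centered, f_prime(x))
```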
Monitor histograms of activations and gradients: It is often useful to visualize statistics of neural network activations and gradients, collected over a large number of training iterations (maybe one epoch). The pre-activation value of hidden units
can tell us if the units saturate, or how often they do. For example, for rectifiers,
how often are they off? Are there units that are always off? For tanh units,
the average of the absolute value of the pre-activations tells us how saturated
the unit is. In a deep network where the propagated gradients quickly grow or
quickly vanish, optimization may be hampered. Finally, it is useful to compare the
magnitude of parameter gradients to the magnitude of the parameters themselves.
As suggested by Bottou (2015), we would like the magnitude of parameter updates
over a minibatch to represent something like 1% of the magnitude of the parameter,
not 50% or 0.001% (which would make the parameters move too slowly). It may
be that some groups of parameters are moving at a good pace while others are
stalled. When the data is sparse (like in natural language), some parameters may
be very rarely updated, and this should be kept in mind when monitoring their
evolution.
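A small sketch of the last kind of monitoring mentioned above: for each parameter tensor, compare the norm of a single SGD update to the norm of the parameters themselves. The arrays here are placeholders for whatever weights and minibatch gradients your framework exposes.

```python
import numpy as np

def update_to_param_ratio(param, grad, learning_rate):
    """Norm of one SGD update divided by the norm of the parameter tensor.
    Ratios around 1e-2 are a reasonable target; much smaller values mean the
    parameters barely move, much larger values mean they move too fast."""
    update = learning_rate * grad
    return np.linalg.norm(update) / (np.linalg.norm(param) + 1e-12)

# Hypothetical usage with placeholder arrays standing in for one layer's weights.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 128))
grad_W = rng.normal(scale=0.01, size=(256, 128))
print("update/parameter ratio:", update_to_param_ratio(W, grad_W, 0.05))
```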
Finally, many deep learning algorithms provide some sort of guarantee about
the results produced at each step. For example, in part III, we will see some approximate inference algorithms that work by using algebraic solutions to optimization
problems. Typically these can be debugged by testing each of their guarantees.
Some guarantees that some optimization algorithms offer include that the objective
function will never increase after one step of the algorithm, that the gradient with
respect to some subset of variables will be zero after each step of the algorithm,
and that the gradient with respect to all variables will be zero at convergence.
Usually due to rounding error, these conditions will not hold exactly in a digital
computer, so the debugging test should include some tolerance parameter.
11.6 Example: Multi-Digit Number Recognition
To provide an end-to-end description of how to apply our design methodology
in practice, we present a brief account of the Street View transcription system,
from the point of view of designing the deep learning components. Obviously,
many other components of the complete system, such as the Street View cars, the
database infrastructure, and so on, were of paramount importance.
From the point of view of the machine learning task, the process began with
data collection. The cars collected the raw data and human operators provided
labels. The transcription task was preceded by a significant amount of dataset
curation, including using other machine learning techniques to detect the house
numbers prior to transcribing them.
The transcription project began with a choice of performance metrics and
desired values for these metrics. An important general principle is to tailor the
choice of metric to the business goals for the project. Because maps are only useful
if they have high accuracy, it was important to set a high accuracy requirement
for this project. Specifically, the goal was to obtain human-level, 98% accuracy.
This level of accuracy may not always be feasible to obtain. In order to reach
this level of accuracy, the Street View transcription system sacrifices coverage.
Coverage thus became the main performance metric optimized during the project,
with accuracy held at 98%. As the convolutional network improved, it became
possible to reduce the confidence threshold below which the network refuses to
transcribe the input, eventually exceeding the goal of 95% coverage.
After choosing quantitative goals, the next step in our recommended methodol-
ogy is to rapidly establish a sensible baseline system. For vision tasks, this means a
convolutional network with rectified linear units. The transcription project began
with such a model. At the time, it was not common for a convolutional network
to output a sequence of predictions. In order to begin with the simplest possible
baseline, the first implementation of the output layer of the model consisted of n
different softmax units to predict a sequence of n characters. These softmax units
were trained exactly the same as if the task were classification, with each softmax
unit trained independently.
Our recommended methodology is to iteratively refine the baseline and test
whether each change makes an improvement. The first change to the Street View
transcription system was motivated by a theoretical understanding of the coverage
metric and the structure of the data. Specifically, the network refuses to classify
an input x whenever the probability of the output sequence p(y | x) < t for some threshold t. Initially, the definition of p(y | x) was ad-hoc, based on simply
multiplying all of the softmax outputs together. This motivated the development
of a specialized output layer and cost function that actually computed a principled
log-likelihood. This approach allowed the example rejection mechanism to function
much more effectively.
At this point, coverage was still below 90%, yet there were no obvious theoretical
problems with the approach. Our methodology therefore suggests to instrument
the train and test set performance in order to determine whether the problem
is underfitting or overfitting. In this case, train and test set error were nearly
identical. Indeed, the main reason this project proceeded so smoothly was the
availability of a dataset with tens of millions of labeled examples. Because train
and test set error were so similar, this suggested that the problem was either due
to underfitting or due to a problem with the training data. One of the debugging
strategies we recommend is to visualize the model’s worst errors. In this case, that
meant visualizing the incorrect training set transcriptions that the model gave the
highest confidence. These proved to mostly consist of examples where the input
image had been cropped too tightly, with some of the digits of the address being
removed by the cropping operation. For example, a photo of an address “1849”
might be cropped too tightly, with only the “849” remaining visible. This problem
could have been resolved by spending weeks improving the accuracy of the address
number detection system responsible for determining the cropping regions. Instead,
the team took a much more practical decision, to simply expand the width of the
crop region to be systematically wider than the address number detection system
predicted. This single change added ten percentage points to the transcription
system’s coverage.
Finally, the last few percentage points of performance came from adjusting
hyperparameters. This mostly consisted of making the model larger while main-
taining some restrictions on its computational cost. Because train and test error
remained roughly equal, it was always clear that any performance deficits were due
to underfitting, as well as due to a few remaining problems with the dataset itself.
Overall, the transcription project was a great success, and allowed hundreds of
millions of addresses to be transcribed both faster and at lower cost than would
have been possible via human effort.
We hope that the design principles described in this chapter will lead to many
other similar successes.
Chapter 12
Applications
In this chapter, we describe how to use deep learning to solve applications in com-
puter vision, speech recognition, natural language processing, and other application
areas of commercial interest. We begin by discussing the large scale neural network
implementations required for most serious AI applications. Next, we review several
specific application areas that deep learning has been used to solve. While one
goal of deep learning is to design algorithms that are capable of solving a broad
variety of tasks, so far some degree of specialization is needed. For example, vision
tasks require processing a large number of input features (pixels) per example.
Language tasks require modeling a large number of possible values (words in the
vocabulary) per input feature.
12.1 Large-Scale Deep Learning
Deep learning is based on the philosophy of connectionism: while an individual
biological neuron or an individual feature in a machine learning model is not
intelligent, a large population of these neurons or features acting together can
exhibit intelligent behavior. It is important to emphasize that the number of neurons must be large. One of the key factors responsible for the improvement in neural networks’ accuracy and in the complexity of the tasks they can solve between the 1980s and today is the dramatic increase in
the size of the networks we use. As we saw in section 1.2.3, network sizes have
grown exponentially for the past three decades, yet artificial neural networks are
only as large as the nervous systems of insects.
Because the size of neural networks is of paramount importance, deep learning
requires high performance hardware and software infrastructure.
12.1.1 Fast CPU Implementations
Traditionally, neural networks were trained using the CPU of a single machine.
Today, this approach is generally considered insufficient. We now mostly use GPU
computing or the CPUs of many machines networked together. Before moving to
these expensive setups, researchers worked hard to demonstrate that CPUs could
not manage the high computational workload required by neural networks.
A description of how to implement efficient numerical CPU code is beyond
the scope of this book, but we emphasize here that careful implementation for
specific CPU families can yield large improvements. For example, in 2011, the best
CPUs available could run neural network workloads faster when using fixed-point
arithmetic rather than floating-point arithmetic. By creating a carefully tuned fixed-
point implementation, Vanhoucke et al. (2011) obtained a threefold speedup over
a strong floating-point system. Each new model of CPU has different performance
characteristics, so sometimes floating-point implementations can be faster too.
The important principle is that careful specialization of numerical computation
routines can yield a large payoff. Other strategies, besides choosing whether to use
fixed or floating point, include optimizing data structures to avoid cache misses
and using vector instructions. Many machine learning researchers neglect these
implementation details, but when the performance of an implementation restricts
the size of the model, the accuracy of the model suffers.
12.1.2 GPU Implementations
Most modern neural network implementations are based on graphics processing
units. Graphics processing units (GPUs) are specialized hardware components
that were originally developed for graphics applications. The consumer market for
video gaming systems spurred development of graphics processing hardware. The
performance characteristics needed for good video gaming systems turn out to be
beneficial for neural networks as well.
Video game rendering requires performing many operations in parallel quickly.
Models of characters and environments are specified in terms of lists of 3-D
coordinates of vertices. Graphics cards must perform matrix multiplication and
division on many vertices in parallel to convert these 3-D coordinates into 2-D
on-screen coordinates. The graphics card must then perform many computations
at each pixel in parallel to determine the color of each pixel. In both cases, the
computations are fairly simple and do not involve much branching compared to
the computational workload that a CPU usually encounters. For example, each
vertex in the same rigid object will be multiplied by the same matrix; there is no
need to evaluate an if statement per-vertex to determine which matrix to multiply
by. The computations are also entirely independent of each other, and thus may
be parallelized easily. The computations also involve processing massive buffers of
memory, containing bitmaps describing the texture (color pattern) of each object
to be rendered. Together, this results in graphics cards having been designed to
have a high degree of parallelism and high memory bandwidth, at the cost of
having a lower clock speed and less branching capability relative to traditional
CPUs.
Neural network algorithms require the same performance characteristics as the
real-time graphics algorithms described above. Neural networks usually involve
large and numerous buffers of parameters, activation values, and gradient values,
each of which must be completely updated during every step of training. These
buffers are large enough to fall outside the cache of a traditional desktop computer
so the memory bandwidth of the system often becomes the rate limiting factor.
GPUs offer a compelling advantage over CPUs due to their high memory bandwidth.
Neural network training algorithms typically do not involve much branching or
sophisticated control, so they are appropriate for GPU hardware. Since neural
networks can be divided into multiple individual “neurons” that can be processed
independently from the other neurons in the same layer, neural networks easily
benefit from the parallelism of GPU computing.
GPU hardware was originally so specialized that it could only be used for
graphics tasks. Over time, GPU hardware became more flexible, allowing custom
subroutines to be used to transform the coordinates of vertices or assign colors to
pixels. In principle, there was no requirement that these pixel values actually be
based on a rendering task. These GPUs could be used for scientific computing by
writing the output of a computation to a buffer of pixel values. Steinkrau et al.
(2005) implemented a two-layer fully connected neural network on a GPU and reported a threefold speedup over their CPU-based baseline. Shortly thereafter, Chellapilla et al. (2006) demonstrated that the same technique could be used to
accelerate supervised convolutional networks.
The popularity of graphics cards for neural network training exploded after
the advent of general purpose GPUs. These GP-GPUs could execute arbitrary
code, not just rendering subroutines. NVIDIA’s CUDA programming language
provided a way to write this arbitrary code in a C-like language. With their
relatively convenient programming model, massive parallelism, and high memory
bandwidth, GP-GPUs now offer an ideal platform for neural network programming.
This platform was rapidly adopted by deep learning researchers soon after it became
available (Raina et al., 2009; Ciresan et al., 2010).
Writing efficient code for GP-GPUs remains a difficult task best left to spe-
cialists. The techniques required to obtain good performance on GPU are very
different from those used on CPU. For example, good CPU-based code is usually
designed to read information from the cache as much as possible. On GPU, most
writable memory locations are not cached, so it can actually be faster to compute
the same value twice, rather than compute it once and read it back from memory.
GPU code is also inherently multi-threaded and the different threads must be
coordinated with each other carefully. For example, memory operations are faster if
they can be coalesced. Coalesced reads or writes occur when several threads can
each read or write a value that they need simultaneously, as part of a single memory
transaction. Different models of GPUs are able to coalesce different kinds of read
or write patterns. Typically, memory operations are easier to coalesce if among n
threads, thread i accesses byte i + j of memory, and j is a multiple of some power
of 2. The exact specifications differ between models of GPU. Another common
consideration for GPUs is making sure that each thread in a group executes the
same instruction simultaneously. This means that branching can be difficult on
GPU. Threads are divided into small groups called warps. Each thread in a warp
executes the same instruction during each cycle, so if different threads within the
same warp need to execute different code paths, these different code paths must
be traversed sequentially rather than in parallel.
Due to the difficulty of writing high performance GPU code, researchers should
structure their workflow to avoid needing to write new GPU code in order to test
new models or algorithms. Typically, one can do this by building a software library
of high performance operations like convolution and matrix multiplication, then
specifying models in terms of calls to this library of operations. For example, the
machine learning library Pylearn2 (Goodfellow et al., 2013c) specifies all of its machine learning algorithms in terms of calls to Theano (Bergstra et al., 2010; Bastien et al., 2012) and cuda-convnet (Krizhevsky, 2010), which provide these
high-performance operations. This factored approach can also ease support for
multiple kinds of hardware. For example, the same Theano program can run on
either CPU or GPU, without needing to change any of the calls to Theano itself.
Other libraries like TensorFlow (Abadi et al., 2015) and Torch (Collobert et al., 2011b) provide similar features.
12.1.3 Large-Scale Distributed Implementations
In many cases, the computational resources available on a single machine are
insufficient. We therefore want to distribute the workload of training and inference
across many machines.
Distributing inference is simple, because each input example we want to process
can be run by a separate machine. This is known as data parallelism.
It is also possible to get model parallelism, where multiple machines work
together on a single datapoint, with each machine running a different part of the
model. This is feasible for both inference and training.
Data parallelism during training is somewhat harder. We can increase the size
of the minibatch used for a single SGD step, but usually we get less than linear
returns in terms of optimization performance. It would be better to allow multiple
machines to compute multiple gradient descent steps in parallel. Unfortunately,
the standard definition of gradient descent is as a completely sequential algorithm:
the gradient at step t is a function of the parameters produced by step t − 1.
This can be solved using asynchronous stochastic gradient descent (Bengio et al., 2001; Recht et al., 2011). In this approach, several processor cores share
the memory representing the parameters. Each core reads parameters without a
lock, then computes a gradient, then increments the parameters without a lock.
This reduces the average amount of improvement that each gradient descent step
yields, because some of the cores overwrite each other’s progress, but the increased
rate of production of steps causes the learning process to be faster overall. Dean
et al. (2012) pioneered the multi-machine implementation of this lock-free approach
to gradient descent, where the parameters are managed by a parameter server
rather than stored in shared memory. Distributed asynchronous gradient descent
remains the primary strategy for training large deep networks and is used by
most major deep learning groups in industry (Chilimbi et al., 2014; Wu et al., 2015). Academic deep learning researchers typically cannot afford the same scale
of distributed learning systems but some research has focused on how to build
distributed networks with relatively low-cost hardware available in the university
setting (Coates et al., 2013).
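The following toy sketch shows the structure of the lock-free update pattern on a single machine, using Python threads that read and write a shared parameter vector without synchronization, on a small least-squares problem. It only illustrates the idea: CPython's GIL prevents true parallelism here, and a real system would distribute the workers across machines or coordinate them through a parameter server.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = np.zeros(10)    # shared parameters, read and updated without any lock

def worker(seed, num_steps, lr=0.01):
    local_rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        i = local_rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of one example's squared error
        w[:] = w - lr * grad              # unsynchronized, in-place update

threads = [threading.Thread(target=worker, args=(s, 2000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to true weights:", np.linalg.norm(w - true_w))
```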
12.1.4 Model Compression
In many commercial applications, it is much more important that the time and
memory cost of running inference in a machine learning model be low than that
the time and memory cost of training be low. For applications that do not require
personalization, it is possible to train a model once, then deploy it to be used by
billions of users. In many cases, the end user is more resource-constrained than
the developer. For example, one might train a speech recognition network with a
powerful computer cluster, then deploy it on mobile phones.
A key strategy for reducing the cost of inference is model compression (Buciluǎ et al., 2006). The basic idea of model compression is to replace the original,
expensive model with a smaller model that requires less memory and runtime to
store and evaluate.
Model compression is applicable when the size of the original model is driven
primarily by a need to prevent overfitting. In most cases, the model with the
lowest generalization error is an ensemble of several independently trained models.
Evaluating all n ensemble members is expensive. Sometimes, even a single model
generalizes better if it is large (for example, if it is regularized with dropout).
These large models learn some function f(x), but do so using many more
parameters than are necessary for the task. Their size is necessary only due to
the limited number of training examples. As soon as we have fit this function
f(x), we can generate a training set containing infinitely many examples, simply
by applying f to randomly sampled points x. We then train the new, smaller,
model to match f(x) on these points. In order to most efficiently use the capacity
of the new, small model, it is best to sample the new x points from a distribution
resembling the actual test inputs that will be supplied to the model later. This can
be done by corrupting training examples or by drawing points from a generative
model trained on the original training set.
Alternatively, one can train the smaller model only on the original training
points, but train it to copy other features of the model, such as its posterior
distribution over the incorrect classes (Hinton et al., 2014, 2015).
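A minimal sketch of that second variant: train a small model to reproduce the class probabilities of a larger model on the training inputs. The scikit-learn models below are arbitrary stand-ins for an expensive, well-regularized model and a cheap deployment model; this is not the exact procedure of any of the cited papers.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier, MLPRegressor

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

# "Large" model: stand-in for an expensive ensemble or dropout-regularized net.
large = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=300,
                      random_state=0).fit(X, y)

# Soft targets: the large model's full posterior over classes, not just the labels.
soft_targets = large.predict_proba(X)

# "Small" model trained to match those probabilities (here via plain regression).
small = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000,
                     random_state=0).fit(X, soft_targets)

agreement = np.mean(np.argmax(small.predict(X), axis=1) == large.predict(X))
print("small model agrees with large model on", agreement, "of inputs")
```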
12.1.5 Dynamic Structure
One strategy for accelerating data processing systems in general is to build systems
that have dynamic structure in the graph describing the computation needed
to process an input. Data processing systems can dynamically determine which
subset of many neural networks should be run on a given input. Individual neural
networks can also exhibit dynamic structure internally by determining which subset
of features (hidden units) to compute given information from the input. This
form of dynamic structure inside neural networks is sometimes called conditional
computation (Bengio, 2013; Bengio et al., 2013b). Since many components of
the architecture may be relevant only for a small amount of possible inputs, the
system can run faster by computing these features only when they are needed.
Dynamic structure of computations is a basic computer science principle applied
generally throughout the software engineering discipline. The simplest versions
of dynamic structure applied to neural networks are based on determining which
subset of some group of neural networks (or other machine learning models) should
be applied to a particular input.
A venerable strategy for accelerating inference in a classifier is to use a cascade
of classifiers. The cascade strategy may be applied when the goal is to detect the
presence of a rare object (or event). To know for sure that the object is present,
we must use a sophisticated classifier with high capacity, that is expensive to run.
However, because the object is rare, we can usually use much less computation
to reject inputs as not containing the object. In these situations, we can train
a sequence of classifiers. The first classifiers in the sequence have low capacity,
and are trained to have high recall. In other words, they are trained to make sure
we do not wrongly reject an input when the object is present. The final classifier
is trained to have high precision. At test time, we run inference by running the
classifiers in a sequence, abandoning any example as soon as any one element in
the cascade rejects it. Overall, this allows us to verify the presence of objects with
high confidence, using a high capacity model, but does not force us to pay the cost
of full inference for every example. There are two different ways that the cascade
can achieve high capacity. One way is to make the later members of the cascade
individually have high capacity. In this case, the system as a whole obviously has
high capacity, because some of its individual members do. It is also possible to
make a cascade in which every individual model has low capacity but the system
as a whole has high capacity due to the combination of many small models. Viola
and Jones (2001) used a cascade of boosted decision trees to implement a fast and
robust face detector suitable for use in handheld digital cameras. Their classifier
localizes a face using essentially a sliding window approach in which many windows
are examined and rejected if they do not contain faces. Another version of cascades
uses the earlier models to implement a sort of hard attention mechanism: the
early members of the cascade localize an object and later members of the cascade
perform further processing given the location of the object. For example, Google
transcribes address numbers from Street View imagery using a two-step cascade
that first locates the address number with one machine learning model and then
transcribes it with another (Goodfellow et al., 2014d).
Decision trees themselves are an example of dynamic structure, because each
node in the tree determines which of its subtrees should be evaluated for each input.
A simple way to accomplish the union of deep learning and dynamic structure
is to train a decision tree in which each node uses a neural network to make the
splitting decision (Guo and Gelfand, 1992), though this has typically not been
done with the primary goal of accelerating inference computations.
In the same spirit, one can use a neural network, called the gater, to select
which one out of several expert networks will be used to compute the output,
given the current input. The first version of this idea is called the mixture of
experts (Nowlan, 1990; Jacobs et al., 1991), in which the gater outputs a set
of probabilities or weights (obtained via a softmax nonlinearity), one per expert,
and the final output is obtained by the weighted combination of the output of
the experts. In that case, the use of the gater does not offer a reduction in
computational cost, but if a single expert is chosen by the gater for each example,
we obtain the hard mixture of experts (Collobert et al., 2001, 2002), which
can considerably accelerate training and inference time. This strategy works well
when the number of gating decisions is small because it is not combinatorial. But
when we want to select different subsets of units or parameters, it is not possible
to use a “soft switch” because it requires enumerating (and computing outputs for)
all the gater configurations. To deal with this problem, several approaches have
been explored to train combinatorial gaters. Bengio et al. (2013b) experiment with several estimators of the gradient on the gating probabilities, while Bacon et al. (2015) and Bengio et al. (2015a) use reinforcement learning techniques (policy
gradient) to learn a form of conditional dropout on blocks of hidden units and get
an actual reduction in computational cost without impacting negatively on the
quality of the approximation.
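A tiny numpy sketch of the difference between the soft and hard mixtures described above: a gater produces softmax weights over a few experts; the soft mixture evaluates every expert and averages their outputs with those weights, while the hard mixture runs only the single expert the gater prefers, which is where the computational saving comes from. The experts and gater here are random toy linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out = 4, 8, 5

experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]  # toy experts
W_gate = rng.normal(size=(d_in, n_experts))                           # toy gater

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

x = rng.normal(size=d_in)
gate = softmax(x @ W_gate)

# Soft mixture: every expert is evaluated; no reduction in computation.
soft_output = sum(g * (x @ E) for g, E in zip(gate, experts))

# Hard mixture: only the single expert selected by the gater is evaluated.
k = int(np.argmax(gate))
hard_output = x @ experts[k]

print(soft_output)
print(hard_output)
```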
Another kind of dynamic structure is a switch, where a hidden unit can
receive input from different units depending on the context. This dynamic routing
approach can be interpreted as an attention mechanism (Olshausen et al., 1993).
So far, the use of a hard switch has not proven effective on large-scale applications.
Contemporary approaches instead use a weighted average over many possible inputs,
and thus do not achieve all of the possible computational benefits of dynamic
structure. Contemporary attention mechanisms are described in section 12.4.5.1.
One major obstacle to using dynamically structured systems is the decreased
degree of parallelism that results from the system following different code branches
for different inputs. This means that few operations in the network can be described
as matrix multiplication or batch convolution on a minibatch of examples. We
can write more specialized sub-routines that convolve each example with different
kernels or multiply each row of a design matrix by a different set of columns
of weights. Unfortunately, these more specialized subroutines are difficult to
implement efficiently. CPU implementations will be slow due to the lack of cache
coherence and GPU implementations will be slow due to the lack of coalesced
memory transactions and the need to serialize warps when members of a warp take
different branches. In some cases, these issues can be mitigated by partitioning the
examples into groups that all take the same branch, and processing these groups
of examples simultaneously. This can be an acceptable strategy for minimizing
the time required to process a fixed amount of examples in an offline setting. In
a real-time setting where examples must be processed continuously, partitioning
the workload can result in load-balancing issues. For example, if we assign one
machine to process the first step in a cascade and another machine to process
the last step in a cascade, then the first will tend to be overloaded and the last
will tend to be underloaded. Similar issues arise if each machine is assigned to
implement different nodes of a neural decision tree.
12.1.6 Specialized Hardware Implementations of Deep Networks
Since the early days of neural networks research, hardware designers have worked
on specialized hardware implementations that could speed up training and/or
inference of neural network algorithms. See early and more recent reviews of
specialized hardware for deep networks (Lindsey and Lindblad, 1994; Beiu et al., 2003; Misra and Saha, 2010).
Different forms of specialized hardware (Graf and Jackel, 1989; Mead and Ismail, 2012; Kim et al., 2009; Pham et al., 2012; Chen et al., 2014a,b) have been developed over the last decades, based on ASICs (application-specific integrated circuits), either digital (based on binary representations of numbers), analog (Graf and Jackel, 1989; Mead and Ismail, 2012) (based on physical implementations of continuous values as voltages or currents), or hybrid implementations
(combining digital and analog components). In recent years more flexible FPGA
(field programmable gate array) implementations (where the particulars of the
circuit can be written on the chip after it has been built) have been developed.
Though software implementations on general-purpose processing units (CPUs
and GPUs) typically use 32 or 64 bits of precision to represent floating point
numbers, it has long been known that it was possible to use less precision, at
least at inference time (Holt and Baker, 1991; Holi and Hwang, 1993; Presley and Haggard, 1994; Simard and Graf, 1994; Wawrzynek et al., 1996; Savich et al., 2007). This has become a more pressing issue in recent years as deep learning
has gained in popularity in industrial products, and as the great impact of faster
hardware was demonstrated with GPUs. Another factor that motivates current
research on specialized hardware for deep networks is that the rate of progress of
a single CPU or GPU core has slowed down, and most recent improvements in
computing speed have come from parallelization across cores (either in CPUs or
GPUs). This is very different from the situation of the 1990s (the previous neural
network era) where the hardware implementations of neural networks (which might
take two years from inception to availability of a chip) could not keep up with
the rapid progress and low prices of general-purpose CPUs. Building specialized
hardware is thus a way to push the envelope further, at a time when new hardware
designs are being developed for low-power devices such as phones, aiming for
general-public applications of deep learning (e.g., with speech, computer vision or
natural language).
Recent work on low-precision implementations of backprop-based neural nets
(Vanhoucke et al., 2011; Courbariaux et al., 2015; Gupta et al., 2015) suggests
that between 8 and 16 bits of precision can suffice for using or training deep
neural networks with back-propagation. What is clear is that more precision is
required during training than at inference time, and that some forms of dynamic
fixed point representation of numbers can be used to reduce how many bits are
required per number. Traditional fixed point numbers are restricted to a fixed
range (which corresponds to a given exponent in a floating point representation).
Dynamic fixed point representations share that range among a set of numbers
(such as all the weights in one layer). Using fixed point rather than floating point
representations and using fewer bits per number reduces the hardware surface area,
power requirements and computing time needed for performing multiplications,
and multiplications are the most demanding of the operations needed to use or
train a modern deep network with backprop.
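A small numpy sketch of the dynamic fixed point idea: all weights in one tensor share a single scale derived from the tensor's largest magnitude, and each weight is stored as a small signed integer. The 8-bit width and the rounding scheme are illustrative choices, not a description of any particular hardware.

```python
import numpy as np

def quantize_dynamic_fixed_point(w, n_bits=8):
    """Quantize a weight tensor to n_bits integers with one shared scale."""
    max_int = 2 ** (n_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / max_int        # shared range for the whole tensor
    q = np.clip(np.round(w / scale), -max_int, max_int).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(64, 32)).astype(np.float32)
q, scale = quantize_dynamic_fixed_point(W)
print("max rounding error:", np.max(np.abs(W - dequantize(q, scale))))
```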
12.2 Computer Vision
Computer vision has traditionally been one of the most active research areas for
deep learning applications, because vision is a task that is effortless for humans
and many animals but challenging for computers (Ballard et al., 1983). Many of
the most popular standard benchmark tasks for deep learning algorithms are forms
of object recognition or optical character recognition.
Computer vision is a very broad field encompassing a wide variety of ways
of processing images, and an amazing diversity of applications. Applications of
computer vision range from reproducing human visual abilities, such as recognizing
faces, to creating entirely new categories of visual abilities. As an example of
the latter category, one recent computer vision application is to recognize sound
waves from the vibrations they induce in objects visible in a video (Davis et al., 2014). Most deep learning research on computer vision has not focused on such
exotic applications that expand the realm of what is possible with imagery but
rather a small core of AI goals aimed at replicating human abilities. Most deep
learning for computer vision is used for object recognition or detection of some
form, whether this means reporting which object is present in an image, annotating
an image with bounding boxes around each object, transcribing a sequence of
symbols from an image, or labeling each pixel in an image with the identity of the
object it belongs to. Because generative modeling has been a guiding principle
of deep learning research, there is also a large body of work on image synthesis
using deep models. While image synthesis ex nihilo is usually not considered a computer vision endeavor, models capable of image synthesis are usually useful for
image restoration, a computer vision task involving repairing defects in images or
removing objects from images.
12.2.1 Preprocessing
Many application areas require sophisticated preprocessing because the original
input comes in a form that is difficult for many deep learning architectures to
represent. Computer vision usually requires relatively little of this kind of pre-
processing. The images should be standardized so that their pixels all lie in the
same, reasonable range, like [0,1] or [-1, 1]. Mixing images that lie in [0,1] with
images that lie in [0, 255] will usually result in failure. Formatting images to have
the same scale is the only kind of preprocessing that is strictly necessary. Many
computer vision architectures require images of a standard size, so images must be
cropped or scaled to fit that size. Even this rescaling is not always strictly necessary.
Some convolutional models accept variably-sized inputs and dynamically adjust
the size of their pooling regions to keep the output size constant (Waibel et al.,
1989). Other convolutional models have variable-sized output that automatically
scales in size with the input, such as models that denoise or label each pixel in an
image (Hadsell et al., 2007).
Dataset augmentation may be seen as a way of preprocessing the training set
only. Dataset augmentation is an excellent way to reduce the generalization error
of most computer vision models. A related idea applicable at test time is to show
the model many different versions of the same input (for example, the same image
cropped at slightly different locations) and have the different instantiations of the
model vote to determine the output. This latter idea can be interpreted as an
ensemble approach, and helps to reduce generalization error.
Other kinds of preprocessing are applied to both the train and the test set with
the goal of putting each example into a more canonical form in order to reduce the
amount of variation that the model needs to account for. Reducing the amount of
variation in the data can both reduce generalization error and reduce the size of
the model needed to fit the training set. Simpler tasks may be solved by smaller
models, and simpler solutions are more likely to generalize well. Preprocessing
of this kind is usually designed to remove some kind of variability in the input
data that is easy for a human designer to describe and that the human designer
is confident has no relevance to the task. When training with large datasets and
large models, this kind of preprocessing is often unnecessary, and it is best to just
let the model learn which kinds of variability it should become invariant to. For
example, the AlexNet system for classifying ImageNet only has one preprocessing
step: subtracting the mean across training examples of each pixel (Krizhevsky et al., 2012).
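That preprocessing step can be written in a couple of lines; the sketch below is ours rather than the authors' code and assumes the training images are stacked into a single NumPy array.

import numpy as np

def center_per_pixel(train_images, test_images):
    # Subtract, from every image, the mean over training examples of each pixel.
    # train_images: array of shape (num_examples, height, width, channels).
    mean_image = train_images.mean(axis=0)
    return train_images - mean_image, test_images - mean_image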
12.2.1.1 Contrast Normalization
One of the most obvious sources of variation that can be safely removed for
many tasks is the amount of contrast in the image. Contrast simply refers to the
magnitude of the difference between the bright and the dark pixels in an image.
There are many ways of quantifying the contrast of an image. In the context of
deep learning, contrast usually refers to the standard deviation of the pixels in an
image or region of an image. Suppose we have an image represented by a tensor
X ∈ R^{r×c×3}, with X_{i,j,1} being the red intensity at row i and column j, X_{i,j,2} giving
the green intensity and X_{i,j,3} giving the blue intensity. Then the contrast of the
entire image is given by

$$\sqrt{\frac{1}{3rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\sum_{k=1}^{3}\left(X_{i,j,k}-\bar{X}\right)^{2}} \qquad (12.1)$$
where X̄ is the mean intensity of the entire image:

$$\bar{X} = \frac{1}{3rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\sum_{k=1}^{3} X_{i,j,k}. \qquad (12.2)$$
Global contrast normalization (GCN) aims to prevent images from having
varying amounts of contrast by subtracting the mean from each image, then
rescaling it so that the standard deviation across its pixels is equal to some
constant s. This approach is complicated by the fact that no scaling factor can
change the contrast of a zero-contrast image (one whose pixels all have equal
intensity). Images with very low but non-zero contrast often have little information
content. Dividing by the true standard deviation usually accomplishes nothing
more than amplifying sensor noise or compression artifacts in such cases. This
motivates introducing a small, positive regularization parameter λ to bias the
estimate of the standard deviation. Alternately, one can constrain the denominator
to be at least ε. Given an input image X, GCN produces an output image X′,
defined such that

$$X'_{i,j,k} = s\,\frac{X_{i,j,k}-\bar{X}}{\max\left\{\epsilon,\;\sqrt{\lambda + \frac{1}{3rc}\sum_{i=1}^{r}\sum_{j=1}^{c}\sum_{k=1}^{3}\left(X_{i,j,k}-\bar{X}\right)^{2}}\right\}}. \qquad (12.3)$$
Datasets consisting of large images cropped to interesting objects are unlikely
to contain any images with nearly constant intensity. In these cases, it is safe to
practically ignore the small-denominator problem by setting λ = 0 and to avoid
division by zero in extremely rare cases by setting ε to an extremely low value like
10^{-8}. This is the approach used by Goodfellow et al. (2013a) on the CIFAR-10
dataset. Small images cropped randomly are more likely to have nearly constant
intensity, making aggressive regularization more useful. Coates et al. (2011) used
ε = 0 and λ = 10 on small, randomly selected patches drawn from CIFAR-10.
The scale parameter s can usually be set to 1, as done by Coates et al. (2011),
or chosen to make each individual pixel have standard deviation across examples
close to 1, as done by Goodfellow et al. (2013a).
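Equation 12.3 translates almost directly into code. The following NumPy rendering is a minimal sketch (ours, not from the text), with s, lam (λ) and eps (ε) exposed as arguments so that either of the settings described above can be reproduced.

import numpy as np

def gcn(X, s=1.0, lam=0.0, eps=1e-8):
    # Global contrast normalization of a single image X of shape (r, c, 3),
    # following equation 12.3: subtract the mean, then divide by a regularized
    # standard deviation and multiply by the scale parameter s.
    X = X.astype(np.float64)
    X_centered = X - X.mean()
    contrast = np.sqrt(lam + np.mean(X_centered ** 2))
    return s * X_centered / max(eps, contrast)

With lam = 0 and eps = 1e-8 this reproduces the large-image setting described above, while lam = 10 gives the more aggressive regularization used on small patches.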
The standard deviation in equation 12.3 is just a rescaling of the L2 norm
of the image (assuming the mean of the image has already been removed). It is
preferable to define GCN in terms of standard deviation rather than L2 norm
because the standard deviation includes division by the number of pixels, so GCN
based on standard deviation allows the same s to be used regardless of image
size. However, the observation that the L2 norm is proportional to the standard
deviation can help build a useful intuition. One can understand GCN as mapping
examples to a spherical shell. See figure 12.1 for an illustration. This can be a
useful property because neural networks are often better at responding to directions
in space rather than exact locations. Responding to multiple distances in the
same direction requires hidden units with collinear weight vectors but different
biases. Such coordination can be difficult for the learning algorithm to discover.
Additionally, many shallow graphical models have problems with representing
multiple separated modes along the same line. GCN avoids these problems by
reducing each example to a direction rather than a direction and a distance.
Figure 12.1: GCN maps examples onto a sphere. (Left) Raw input data may have any
norm. (Center) GCN with λ = 0 maps all non-zero examples perfectly onto a sphere. Here
we use s = 1 and ε = 10^{-8}. Because we use GCN based on normalizing the standard
deviation rather than the L2 norm, the resulting sphere is not the unit sphere. (Right)
Regularized GCN, with λ > 0, draws examples toward the sphere but does not completely
discard the variation in their norm. We leave s and ε the same as before.

Counterintuitively, there is a preprocessing operation known as sphering, and
it is not the same operation as GCN. Sphering does not refer to making the data
lie on a spherical shell, but rather to rescaling the principal components to have
equal variance, so that the multivariate normal distribution used by PCA has
spherical contours. Sphering is more commonly known as whitening.
Global contrast normalization will often fail to highlight image features we
would like to stand out, such as edges and corners. If we have a scene with a large
dark area and a large bright area (such as a city square with half the image in
the shadow of a building) then global contrast normalization will ensure there is a
large difference between the brightness of the dark area and the brightness of the
light area. It will not, however, ensure that edges within the dark region stand out.
This motivates local contrast normalization. Local contrast normalization
ensures that the contrast is normalized across each small window, rather than over
the image as a whole. See figure 12.2 for a comparison of global and local contrast
normalization.
Various definitions of local contrast normalization are possible. In all cases,
one modifies each pixel by subtracting a mean of nearby pixels and dividing by
a standard deviation of nearby pixels. In some cases, this is literally the mean
and standard deviation of all pixels in a rectangular window centered on the
pixel to be modified (Pinto et al., 2008). In other cases, this is a weighted mean
and weighted standard deviation using Gaussian weights centered on the pixel to
be modified. In the case of color images, some strategies process different color
channels separately while others combine information from different channels to
normalize each pixel (Sermanet et al., 2012).

Figure 12.2: A comparison of global and local contrast normalization (columns: input
image, GCN, LCN). Visually, the effects of global contrast normalization are subtle. It
places all images on roughly the same scale, which reduces the burden on the learning
algorithm to handle multiple scales. Local contrast normalization modifies the image
much more, discarding all regions of constant intensity. This allows the model to focus
on just the edges. Regions of fine texture, such as the houses in the second row, may
lose some detail due to the bandwidth of the normalization kernel being too high.
Local contrast normalization can usually be implemented efficiently by using
separable convolution (see section 9.8) to compute feature maps of local means and
local standard deviations, then using element-wise subtraction and element-wise
division on different feature maps.
Local contrast normalization is a differentiable operation and can also be used as
a nonlinearity applied to the hidden layers of a network, as well as a preprocessing
operation applied to the input.
As with global contrast normalization, we typically need to regularize local
contrast normalization to avoid division by zero. In fact, because local contrast
normalization typically acts on smaller windows, it is even more important to
regularize. Smaller windows are more likely to contain values that are all nearly
the same as each other, and thus more likely to have zero standard deviation.
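A minimal sketch of that recipe, assuming SciPy's Gaussian filter (which is implemented as separable one-dimensional convolutions) and operating on a single-channel image, might look as follows; the window width sigma and the regularization constant eps are illustrative choices, not values from the text.

import numpy as np
from scipy.ndimage import gaussian_filter

def lcn(X, sigma=4.0, eps=1e-4):
    # Local contrast normalization of a 2-D (single-channel) image:
    # subtract a Gaussian-weighted local mean, then divide by a
    # Gaussian-weighted local standard deviation, regularized by eps.
    X = X.astype(np.float64)
    local_mean = gaussian_filter(X, sigma)
    centered = X - local_mean
    local_std = np.sqrt(gaussian_filter(centered ** 2, sigma))
    return centered / np.maximum(local_std, eps)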
12.2.1.2 Dataset Augmentation
As described in section 7.4, it is easy to improve the generalization of a classifier
by increasing the size of the training set by adding extra copies of the training
examples that have been modified with transformations that do not change the
class. Object recognition is a classification task that is especially amenable to
this form of dataset augmentation because the class is invariant to so many
transformations and the input can be easily transformed with many geometric
operations. As described before, classifiers can benefit from random translations,
rotations, and in some cases, flips of the input to augment the dataset. In specialized
computer vision applications, more advanced transformations are commonly used
for dataset augmentation. These schemes include random perturbation of the
colors in an image (Krizhevsky et al., 2012) and nonlinear geometric distortions of
the input (LeCun et al., 1998b).
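A small sketch of such class-preserving transformations (ours, not from the text; the crop size and jitter range are arbitrary illustrative choices) combines a random crop, an occasional horizontal flip, and a crude color perturbation.

import numpy as np

def augment(image, crop=24, rng=None):
    # image: float array of shape (height, width, 3) with values in [0, 1].
    rng = rng if rng is not None else np.random.default_rng()
    h, w, _ = image.shape
    dy = rng.integers(0, h - crop + 1)
    dx = rng.integers(0, w - crop + 1)
    out = image[dy:dy + crop, dx:dx + crop]   # random translation via cropping
    if rng.random() < 0.5:
        out = out[:, ::-1]                    # horizontal flip preserves the class
    out = out * rng.uniform(0.8, 1.2)         # crude random brightness/contrast jitter
    return np.clip(out, 0.0, 1.0)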
12.3 Speech Recognition
The task of speech recognition is to map an acoustic signal containing a spoken
natural language utterance into the corresponding sequence of words intended by
the speaker. Let X = (x^{(1)}, x^{(2)}, . . . , x^{(T)}) denote the sequence of acoustic input
vectors (traditionally produced by splitting the audio into 20ms frames). Most
speech recognition systems preprocess the input using specialized hand-designed
features, but some (Jaitly and Hinton, 2011) deep learning systems learn features
from raw input. Let y = (y_1, y_2, . . . , y_N) denote the target output sequence (usually
a sequence of words or characters). The automatic speech recognition (ASR)
task consists of creating a function f^{*}_{ASR} that computes the most probable linguistic
sequence y given the acoustic sequence X:

$$f^{*}_{\mathrm{ASR}}(\boldsymbol{X}) = \arg\max_{\boldsymbol{y}} P^{*}(\boldsymbol{y} \mid \mathbf{X} = \boldsymbol{X}) \qquad (12.4)$$

where P^{*} is the true conditional distribution relating the inputs X to the targets y.
Since the 1980s and until about 2009–2012, state-of-the-art speech recognition
systems primarily combined hidden Markov models (HMMs) and Gaussian mixture
models (GMMs). GMMs modeled the association between acoustic features and
phonemes (Bahl et al., 1987), while HMMs modeled the sequence of phonemes.
The GMM-HMM model family treats acoustic waveforms as being generated
by the following process: first an HMM generates a sequence of phonemes and
discrete subphonemic states (such as the beginning, middle and end of each
phoneme), then a GMM transforms each discrete symbol into a brief segment of
audio waveform.
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf
Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf

More Related Content

PDF
Scikit learn 0.16.0 user guide
PDF
MLBOOK.pdf
PDF
book.pdf
PDF
Am06 complete 16-sep06
PDF
python learn basic tutorial learn easy..
PDF
R data mining_clear
PDF
phd-thesis
Scikit learn 0.16.0 user guide
MLBOOK.pdf
book.pdf
Am06 complete 16-sep06
python learn basic tutorial learn easy..
R data mining_clear
phd-thesis

Similar to Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf (20)

PDF
Social Media Mining _indian edition available.pdf
PDF
Social Media Mining _indian edition available.pdf
PDF
math-basics.pdf
PDF
PDF
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
PDF
Deep Learning for Computer Vision - Image Classification, Object Detection an...
PDF
Thats How We C
PDF
0802 python-tutorial
PDF
Python everthing
PDF
0802 python-tutorial
PDF
Tutorial edit
PDF
Best Python tutorial (release 3.7.0)
PDF
Dragos Datcu_PhD_Thesis
PDF
An Introduction to Computer Science - python
PDF
Exercises_in_Machine_Learning_1657514028.pdf
PDF
tutorial.pdf
PDF
pyspark.pdf
PDF
Big data-and-the-web
PDF
Big Data and the Web: Algorithms for Data Intensive Scalable Computing
Social Media Mining _indian edition available.pdf
Social Media Mining _indian edition available.pdf
math-basics.pdf
Automatic Detection of Performance Design and Deployment Antipatterns in Comp...
Deep Learning for Computer Vision - Image Classification, Object Detection an...
Thats How We C
0802 python-tutorial
Python everthing
0802 python-tutorial
Tutorial edit
Best Python tutorial (release 3.7.0)
Dragos Datcu_PhD_Thesis
An Introduction to Computer Science - python
Exercises_in_Machine_Learning_1657514028.pdf
tutorial.pdf
pyspark.pdf
Big data-and-the-web
Big Data and the Web: Algorithms for Data Intensive Scalable Computing
Ad

Recently uploaded (20)

PDF
Mega Projects Data Mega Projects Data
PPTX
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PPTX
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
PPTX
Computer network topology notes for revision
PPTX
Introduction to machine learning and Linear Models
PDF
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
PPTX
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
PPTX
ALIMENTARY AND BILIARY CONDITIONS 3-1.pptx
PPT
Quality review (1)_presentation of this 21
PPTX
Acceptance and paychological effects of mandatory extra coach I classes.pptx
PPTX
Introduction to Knowledge Engineering Part 1
PDF
.pdf is not working space design for the following data for the following dat...
PPTX
Business Ppt On Nestle.pptx huunnnhhgfvu
PPTX
IB Computer Science - Internal Assessment.pptx
PDF
Fluorescence-microscope_Botany_detailed content
PPTX
Supervised vs unsupervised machine learning algorithms
PDF
annual-report-2024-2025 original latest.
PPTX
Data_Analytics_and_PowerBI_Presentation.pptx
PDF
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
Mega Projects Data Mega Projects Data
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
Computer network topology notes for revision
Introduction to machine learning and Linear Models
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
ALIMENTARY AND BILIARY CONDITIONS 3-1.pptx
Quality review (1)_presentation of this 21
Acceptance and paychological effects of mandatory extra coach I classes.pptx
Introduction to Knowledge Engineering Part 1
.pdf is not working space design for the following data for the following dat...
Business Ppt On Nestle.pptx huunnnhhgfvu
IB Computer Science - Internal Assessment.pptx
Fluorescence-microscope_Botany_detailed content
Supervised vs unsupervised machine learning algorithms
annual-report-2024-2025 original latest.
Data_Analytics_and_PowerBI_Presentation.pptx
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
Ad

Deep learning_ adaptive computation and machine learning ( PDFDrive ).pdf

  • 2. Deep Learning Ian Goodfellow Yoshua Bengio Aaron Courville
  • 3. Contents Website vii Acknowledgments viii Notation xi 1 Introduction 1 1.1 Who Should Read This Book? . . . . . . . . . . . . . . . . . . . . 8 1.2 Historical Trends in Deep Learning . . . . . . . . . . . . . . . . . 11 I Applied Math and Machine Learning Basics 29 2 Linear Algebra 31 2.1 Scalars, Vectors, Matrices and Tensors . . . . . . . . . . . . . . . 31 2.2 Multiplying Matrices and Vectors . . . . . . . . . . . . . . . . . . 34 2.3 Identity and Inverse Matrices . . . . . . . . . . . . . . . . . . . . 36 2.4 Linear Dependence and Span . . . . . . . . . . . . . . . . . . . . 37 2.5 Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 2.6 Special Kinds of Matrices and Vectors . . . . . . . . . . . . . . . 40 2.7 Eigendecomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 42 2.8 Singular Value Decomposition . . . . . . . . . . . . . . . . . . . . 44 2.9 The Moore-Penrose Pseudoinverse . . . . . . . . . . . . . . . . . . 45 2.10 The Trace Operator . . . . . . . . . . . . . . . . . . . . . . . . . 46 2.11 The Determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 2.12 Example: Principal Components Analysis . . . . . . . . . . . . . 48 3 Probability and Information Theory 53 3.1 Why Probability? . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 i
  • 4. CONTENTS 3.2 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . 56 3.3 Probability Distributions . . . . . . . . . . . . . . . . . . . . . . . 56 3.4 Marginal Probability . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.5 Conditional Probability . . . . . . . . . . . . . . . . . . . . . . . 59 3.6 The Chain Rule of Conditional Probabilities . . . . . . . . . . . . 59 3.7 Independence and Conditional Independence . . . . . . . . . . . . 60 3.8 Expectation, Variance and Covariance . . . . . . . . . . . . . . . 60 3.9 Common Probability Distributions . . . . . . . . . . . . . . . . . 62 3.10 Useful Properties of Common Functions . . . . . . . . . . . . . . 67 3.11 Bayes’ Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 3.12 Technical Details of Continuous Variables . . . . . . . . . . . . . 71 3.13 Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . 73 3.14 Structured Probabilistic Models . . . . . . . . . . . . . . . . . . . 75 4 Numerical Computation 80 4.1 Overflow and Underflow . . . . . . . . . . . . . . . . . . . . . . . 80 4.2 Poor Conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . 82 4.3 Gradient-Based Optimization . . . . . . . . . . . . . . . . . . . . 82 4.4 Constrained Optimization . . . . . . . . . . . . . . . . . . . . . . 93 4.5 Example: Linear Least Squares . . . . . . . . . . . . . . . . . . . 96 5 Machine Learning Basics 98 5.1 Learning Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 99 5.2 Capacity, Overfitting and Underfitting . . . . . . . . . . . . . . . 110 5.3 Hyperparameters and Validation Sets . . . . . . . . . . . . . . . . 120 5.4 Estimators, Bias and Variance . . . . . . . . . . . . . . . . . . . . 122 5.5 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . . . 131 5.6 Bayesian Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 135 5.7 Supervised Learning Algorithms . . . . . . . . . . . . . . . . . . . 140 5.8 Unsupervised Learning Algorithms . . . . . . . . . . . . . . . . . 146 5.9 Stochastic Gradient Descent . . . . . . . . . . . . . . . . . . . . . 151 5.10 Building a Machine Learning Algorithm . . . . . . . . . . . . . . 153 5.11 Challenges Motivating Deep Learning . . . . . . . . . . . . . . . . 155 II Deep Networks: Modern Practices 166 6 Deep Feedforward Networks 168 6.1 Example: Learning XOR . . . . . . . . . . . . . . . . . . . . . . . 171 6.2 Gradient-Based Learning . . . . . . . . . . . . . . . . . . . . . . . 177 ii
  • 5. CONTENTS 6.3 Hidden Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 6.4 Architecture Design . . . . . . . . . . . . . . . . . . . . . . . . . . 197 6.5 Back-Propagation and Other Differentiation Algorithms . . . . . 204 6.6 Historical Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 7 Regularization for Deep Learning 228 7.1 Parameter Norm Penalties . . . . . . . . . . . . . . . . . . . . . . 230 7.2 Norm Penalties as Constrained Optimization . . . . . . . . . . . . 237 7.3 Regularization and Under-Constrained Problems . . . . . . . . . 239 7.4 Dataset Augmentation . . . . . . . . . . . . . . . . . . . . . . . . 240 7.5 Noise Robustness . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 7.6 Semi-Supervised Learning . . . . . . . . . . . . . . . . . . . . . . 243 7.7 Multi-Task Learning . . . . . . . . . . . . . . . . . . . . . . . . . 244 7.8 Early Stopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 7.9 Parameter Tying and Parameter Sharing . . . . . . . . . . . . . . 253 7.10 Sparse Representations . . . . . . . . . . . . . . . . . . . . . . . . 254 7.11 Bagging and Other Ensemble Methods . . . . . . . . . . . . . . . 256 7.12 Dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 7.13 Adversarial Training . . . . . . . . . . . . . . . . . . . . . . . . . 268 7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier 270 8 Optimization for Training Deep Models 274 8.1 How Learning Differs from Pure Optimization . . . . . . . . . . . 275 8.2 Challenges in Neural Network Optimization . . . . . . . . . . . . 282 8.3 Basic Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 294 8.4 Parameter Initialization Strategies . . . . . . . . . . . . . . . . . 301 8.5 Algorithms with Adaptive Learning Rates . . . . . . . . . . . . . 306 8.6 Approximate Second-Order Methods . . . . . . . . . . . . . . . . 310 8.7 Optimization Strategies and Meta-Algorithms . . . . . . . . . . . 317 9 Convolutional Networks 330 9.1 The Convolution Operation . . . . . . . . . . . . . . . . . . . . . 331 9.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335 9.3 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 9.4 Convolution and Pooling as an Infinitely Strong Prior . . . . . . . 345 9.5 Variants of the Basic Convolution Function . . . . . . . . . . . . 347 9.6 Structured Outputs . . . . . . . . . . . . . . . . . . . . . . . . . . 358 9.7 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 9.8 Efficient Convolution Algorithms . . . . . . . . . . . . . . . . . . 362 9.9 Random or Unsupervised Features . . . . . . . . . . . . . . . . . 363 iii
  • 6. CONTENTS 9.10 The Neuroscientific Basis for Convolutional Networks . . . . . . . 364 9.11 Convolutional Networks and the History of Deep Learning . . . . 371 10 Sequence Modeling: Recurrent and Recursive Nets 373 10.1 Unfolding Computational Graphs . . . . . . . . . . . . . . . . . . 375 10.2 Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . 378 10.3 Bidirectional RNNs . . . . . . . . . . . . . . . . . . . . . . . . . . 394 10.4 Encoder-Decoder Sequence-to-Sequence Architectures . . . . . . . 396 10.5 Deep Recurrent Networks . . . . . . . . . . . . . . . . . . . . . . 398 10.6 Recursive Neural Networks . . . . . . . . . . . . . . . . . . . . . . 400 10.7 The Challenge of Long-Term Dependencies . . . . . . . . . . . . . 401 10.8 Echo State Networks . . . . . . . . . . . . . . . . . . . . . . . . . 404 10.9 Leaky Units and Other Strategies for Multiple Time Scales . . . . 406 10.10 The Long Short-Term Memory and Other Gated RNNs . . . . . . 408 10.11 Optimization for Long-Term Dependencies . . . . . . . . . . . . . 413 10.12 Explicit Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . 416 11 Practical Methodology 421 11.1 Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . . . 422 11.2 Default Baseline Models . . . . . . . . . . . . . . . . . . . . . . . 425 11.3 Determining Whether to Gather More Data . . . . . . . . . . . . 426 11.4 Selecting Hyperparameters . . . . . . . . . . . . . . . . . . . . . . 427 11.5 Debugging Strategies . . . . . . . . . . . . . . . . . . . . . . . . . 436 11.6 Example: Multi-Digit Number Recognition . . . . . . . . . . . . . 440 12 Applications 443 12.1 Large-Scale Deep Learning . . . . . . . . . . . . . . . . . . . . . . 443 12.2 Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 452 12.3 Speech Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . 458 12.4 Natural Language Processing . . . . . . . . . . . . . . . . . . . . 461 12.5 Other Applications . . . . . . . . . . . . . . . . . . . . . . . . . . 478 III Deep Learning Research 486 13 Linear Factor Models 489 13.1 Probabilistic PCA and Factor Analysis . . . . . . . . . . . . . . . 490 13.2 Independent Component Analysis (ICA) . . . . . . . . . . . . . . 491 13.3 Slow Feature Analysis . . . . . . . . . . . . . . . . . . . . . . . . 493 13.4 Sparse Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496 iv
  • 7. CONTENTS 13.5 Manifold Interpretation of PCA . . . . . . . . . . . . . . . . . . . 499 14 Autoencoders 502 14.1 Undercomplete Autoencoders . . . . . . . . . . . . . . . . . . . . 503 14.2 Regularized Autoencoders . . . . . . . . . . . . . . . . . . . . . . 504 14.3 Representational Power, Layer Size and Depth . . . . . . . . . . . 508 14.4 Stochastic Encoders and Decoders . . . . . . . . . . . . . . . . . . 509 14.5 Denoising Autoencoders . . . . . . . . . . . . . . . . . . . . . . . 510 14.6 Learning Manifolds with Autoencoders . . . . . . . . . . . . . . . 515 14.7 Contractive Autoencoders . . . . . . . . . . . . . . . . . . . . . . 521 14.8 Predictive Sparse Decomposition . . . . . . . . . . . . . . . . . . 523 14.9 Applications of Autoencoders . . . . . . . . . . . . . . . . . . . . 524 15 Representation Learning 526 15.1 Greedy Layer-Wise Unsupervised Pretraining . . . . . . . . . . . 528 15.2 Transfer Learning and Domain Adaptation . . . . . . . . . . . . . 536 15.3 Semi-Supervised Disentangling of Causal Factors . . . . . . . . . 541 15.4 Distributed Representation . . . . . . . . . . . . . . . . . . . . . . 546 15.5 Exponential Gains from Depth . . . . . . . . . . . . . . . . . . . 553 15.6 Providing Clues to Discover Underlying Causes . . . . . . . . . . 554 16 Structured Probabilistic Models for Deep Learning 558 16.1 The Challenge of Unstructured Modeling . . . . . . . . . . . . . . 559 16.2 Using Graphs to Describe Model Structure . . . . . . . . . . . . . 563 16.3 Sampling from Graphical Models . . . . . . . . . . . . . . . . . . 580 16.4 Advantages of Structured Modeling . . . . . . . . . . . . . . . . . 582 16.5 Learning about Dependencies . . . . . . . . . . . . . . . . . . . . 582 16.6 Inference and Approximate Inference . . . . . . . . . . . . . . . . 584 16.7 The Deep Learning Approach to Structured Probabilistic Models 585 17 Monte Carlo Methods 590 17.1 Sampling and Monte Carlo Methods . . . . . . . . . . . . . . . . 590 17.2 Importance Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 592 17.3 Markov Chain Monte Carlo Methods . . . . . . . . . . . . . . . . 595 17.4 Gibbs Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599 17.5 The Challenge of Mixing between Separated Modes . . . . . . . . 599 18 Confronting the Partition Function 605 18.1 The Log-Likelihood Gradient . . . . . . . . . . . . . . . . . . . . 606 18.2 Stochastic Maximum Likelihood and Contrastive Divergence . . . 607 v
  • 8. CONTENTS 18.3 Pseudolikelihood . . . . . . . . . . . . . . . . . . . . . . . . . . . 615 18.4 Score Matching and Ratio Matching . . . . . . . . . . . . . . . . 617 18.5 Denoising Score Matching . . . . . . . . . . . . . . . . . . . . . . 619 18.6 Noise-Contrastive Estimation . . . . . . . . . . . . . . . . . . . . 620 18.7 Estimating the Partition Function . . . . . . . . . . . . . . . . . . 623 19 Approximate Inference 631 19.1 Inference as Optimization . . . . . . . . . . . . . . . . . . . . . . 633 19.2 Expectation Maximization . . . . . . . . . . . . . . . . . . . . . . 634 19.3 MAP Inference and Sparse Coding . . . . . . . . . . . . . . . . . 635 19.4 Variational Inference and Learning . . . . . . . . . . . . . . . . . 638 19.5 Learned Approximate Inference . . . . . . . . . . . . . . . . . . . 651 20 Deep Generative Models 654 20.1 Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . . . . . 654 20.2 Restricted Boltzmann Machines . . . . . . . . . . . . . . . . . . . 656 20.3 Deep Belief Networks . . . . . . . . . . . . . . . . . . . . . . . . . 660 20.4 Deep Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . . 663 20.5 Boltzmann Machines for Real-Valued Data . . . . . . . . . . . . . 676 20.6 Convolutional Boltzmann Machines . . . . . . . . . . . . . . . . . 683 20.7 Boltzmann Machines for Structured or Sequential Outputs . . . . 685 20.8 Other Boltzmann Machines . . . . . . . . . . . . . . . . . . . . . 686 20.9 Back-Propagation through Random Operations . . . . . . . . . . 687 20.10 Directed Generative Nets . . . . . . . . . . . . . . . . . . . . . . . 692 20.11 Drawing Samples from Autoencoders . . . . . . . . . . . . . . . . 711 20.12 Generative Stochastic Networks . . . . . . . . . . . . . . . . . . . 714 20.13 Other Generation Schemes . . . . . . . . . . . . . . . . . . . . . . 716 20.14 Evaluating Generative Models . . . . . . . . . . . . . . . . . . . . 717 20.15 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720 Bibliography 721 Index 777 vi
  • 9. Website www.deeplearningbook.org This book is accompanied by the above website. The website provides a variety of supplementary material, including exercises, lecture slides, corrections of mistakes, and other resources that should be useful to both readers and instructors. vii
  • 10. Acknowledgments This book would not have been possible without the contributions of many people. We would like to thank those who commented on our proposal for the book and helped plan its contents and organization: Guillaume Alain, Kyunghyun Cho, Çağlar Gülçehre, David Krueger, Hugo Larochelle, Razvan Pascanu and Thomas Rohée. We would like to thank the people who offered feedback on the content of the book itself. Some offered feedback on many chapters: Martín Abadi, Guillaume Alain, Ion Androutsopoulos, Fred Bertsch, Olexa Bilaniuk, Ufuk Can Biçici, Matko Bošnjak, John Boersma, Greg Brockman, Alexandre de Brébisson, Pierre Luc Carrier, Sarath Chandar, Pawel Chilinski, Mark Daoust, Oleg Dashevskii, Laurent Dinh, Stephan Dreseitl, Jim Fan, Miao Fan, Meire Fortunato, Frédéric Francis, Nando de Freitas, Çağlar Gülçehre, Jurgen Van Gael, Javier Alonso García, Jonathan Hunt, Gopi Jeyaram, Chingiz Kabytayev, Lukasz Kaiser, Varun Kanade, Asifullah Khan, Akiel Khan, John King, Diederik P. Kingma, Yann LeCun, Rudolf Mathey, Matías Mattamala, Abhinav Maurya, Kevin Murphy, Oleg Mürk, Roman Novak, Augustus Q. Odena, Simon Pavlik, Karl Pichotta, Eddie Pierce, Kari Pulli, Roussel Rahman, Tapani Raiko, Anurag Ranjan, Johannes Roith, Mihaela Rosca, Halis Sak, César Salgado, Grigory Sapunov, Yoshinori Sasaki, Mike Schuster, Julian Serban, Nir Shabat, Ken Shirriff, Andre Simpelo, Scott Stanley, David Sussillo, Ilya Sutskever, Carles Gelada Sáez, Graham Taylor, Valentin Tolmer, Massimiliano Tomassoli, An Tran, Shubhendu Trivedi, Alexey Umnov, Vincent Vanhoucke, Marco Visentini-Scarzanella, Martin Vita, David Warde-Farley, Dustin Webb, Kelvin Xu, Wei Xue, Ke Yang, Li Yao, Zygmunt Zając and Ozan Çağlayan. We would also like to thank those who provided us with useful feedback on individual chapters: • Notation: Zhang Yuanhang. • Chapter , : Yusuf Akgul, Sebastien Bratieres, Samira Ebrahimi, 1 Introduction viii
  • 11. CONTENTS Charlie Gorichanaz, Brendan Loudermilk, Eric Morris, Cosmin Pârvulescu and Alfredo Solano. • Chapter , : Amjad Almahairi, Nikola Banić, Kevin Bennett, 2 Linear Algebra Philippe Castonguay, Oscar Chang, Eric Fosler-Lussier, Andrey Khalyavin, Sergey Oreshkov, István Petrás, Dennis Prangle, Thomas Rohée, Gitanjali Gulve Sehgal, Colby Toland, Alessandro Vitale and Bob Welland. • Chapter , : John Philip Anderson, Kai 3 Probability and Information Theory Arulkumaran, Vincent Dumoulin, Rui Fa, Stephan Gouws, Artem Oboturov, Antti Rasmus, Alexey Surkov and Volker Tresp. • Chapter , : Tran Lam AnIan Fischer and Hu 4 Numerical Computation Yuhuang. • Chapter , : Dzmitry Bahdanau, Justin Domingue, 5 Machine Learning Basics Nikhil Garg, Makoto Otsuka, Bob Pepin, Philip Popien, Emmanuel Rayner, Peter Shepard, Kee-Bong Song, Zheng Sun and Andy Wu. • Chapter , 6 Deep Feedforward Networks: Uriel Berdugo, Fabrizio Bottarel, Elizabeth Burl, Ishan Durugkar, Jeff Hlywa, Jong Wook Kim, David Krueger and Aditya Kumar Praharaj. • Chapter , : Morten Kolbæk, Kshitij Lauria, 7 Regularization for Deep Learning Inkyu Lee, Sunil Mohan, Hai Phong Phan and Joshua Salisbury. • Chapter , 8 Optimization for Training Deep Models: Marcel Ackermann, Peter Armitage, Rowel Atienza, Andrew Brock, Tegan Maharaj, James Martens, Kashif Rasul, Klaus Strobl and Nicholas Turner. • Chapter , 9 Convolutional Networks: Martín Arjovsky, Eugene Brevdo, Kon- stantin Divilov, Eric Jensen, Mehdi Mirza, Alex Paino, Marjorie Sayer, Ryan Stout and Wentao Wu. • Chapter , 10 Sequence Modeling: Recurrent and Recursive Nets: Gökçen Eraslan, Steven Hickson, Razvan Pascanu, Lorenzo von Ritter, Rui Rodrigues, Dmitriy Serdyuk, Dongyu Shi and Kaiyu Yang. • Chapter , : Daniel Beckstein. 11 Practical Methodology • Chapter , : George Dahl, Vladimir Nekrasov and Ribana 12 Applications Roscher. • Chapter , 13 Linear Factor Models: Jayanth Koushik. ix
  • 12. CONTENTS • Chapter , : Kunal Ghosh. 15 Representation Learning • Chapter , : Minh Lê 16 Structured Probabilistic Models for Deep Learning and Anton Varfolom. • Chapter , 18 Confronting the Partition Function: Sam Bowman. • Chapter , : Yujia Bao. 19 Approximate Inference • Chapter , 20 Deep Generative Models: Nicolas Chapados, Daniel Galvez, Wenming Ma, Fady Medhat, Shakir Mohamed and Grégoire Montavon. • Bibliography: Lukas Michelbacher and Leslie N. Smith. We also want to thank those who allowed us to reproduce images, figures or data from their publications. We indicate their contributions in the figure captions throughout the text. We would like to thank Lu Wang for writing pdf2htmlEX, which we used to make the web version of the book, and for offering support to improve the quality of the resulting HTML. We would like to thank Ian’s wife Daniela Flori Goodfellow for patiently supporting Ian during the writing of the book as well as for help with proofreading. We would like to thank the Google Brain team for providing an intellectual environment where Ian could devote a tremendous amount of time to writing this book and receive feedback and guidance from colleagues. We would especially like to thank Ian’s former manager, Greg Corrado, and his current manager, Samy Bengio, for their support of this project. Finally, we would like to thank Geoffrey Hinton for encouragement when writing was difficult. x
Notation

This section provides a concise reference describing the notation used throughout this book. If you are unfamiliar with any of the corresponding mathematical concepts, we describe most of these ideas in chapters 2–4.

Numbers and Arrays
a : A scalar (integer or real)
a (bold lowercase) : A vector
A (bold uppercase) : A matrix
A (bold sans-serif) : A tensor
I_n : Identity matrix with n rows and n columns
I : Identity matrix with dimensionality implied by context
e^(i) : Standard basis vector [0, ..., 0, 1, 0, ..., 0] with a 1 at position i
diag(a) : A square, diagonal matrix with diagonal entries given by a
a (upright) : A scalar random variable
a (bold upright lowercase) : A vector-valued random variable
A (bold upright uppercase) : A matrix-valued random variable
Sets and Graphs
A (calligraphic) : A set
R : The set of real numbers
{0, 1} : The set containing 0 and 1
{0, 1, ..., n} : The set of all integers between 0 and n
[a, b] : The real interval including a and b
(a, b] : The real interval excluding a but including b
A \ B : Set subtraction, i.e., the set containing the elements of A that are not in B
G : A graph
Pa_G(x_i) : The parents of x_i in G

Indexing
a_i : Element i of vector a, with indexing starting at 1
a_(−i) : All elements of vector a except for element i
A_(i,j) : Element i, j of matrix A
A_(i,:) : Row i of matrix A
A_(:,i) : Column i of matrix A
A_(i,j,k) : Element (i, j, k) of a 3-D tensor A
A_(:,:,i) : 2-D slice of a 3-D tensor
a_i : Element i of the random vector a

Linear Algebra Operations
A^⊤ : Transpose of matrix A
A^+ : Moore-Penrose pseudoinverse of A
A ⊙ B : Element-wise (Hadamard) product of A and B
det(A) : Determinant of A
Calculus
dy/dx : Derivative of y with respect to x
∂y/∂x : Partial derivative of y with respect to x
∇_x y : Gradient of y with respect to x
∇_X y : Matrix containing derivatives of y with respect to X (X a matrix)
∇_X y : Tensor containing derivatives of y with respect to X (X a tensor)
∂f/∂x : Jacobian matrix J ∈ R^(m×n) of f : R^n → R^m
∇_x^2 f(x) or H(f)(x) : The Hessian matrix of f at input point x
∫ f(x) dx : Definite integral over the entire domain of x
∫_S f(x) dx : Definite integral with respect to x over the set S

Probability and Information Theory
a ⊥ b : The random variables a and b are independent
a ⊥ b | c : They are conditionally independent given c
P(a) : A probability distribution over a discrete variable
p(a) : A probability distribution over a continuous variable, or over a variable whose type has not been specified
a ∼ P : Random variable a has distribution P
E_(x∼P)[f(x)] or E f(x) : Expectation of f(x) with respect to P(x)
Var(f(x)) : Variance of f(x) under P(x)
Cov(f(x), g(x)) : Covariance of f(x) and g(x) under P(x)
H(x) : Shannon entropy of the random variable x
D_KL(P ‖ Q) : Kullback-Leibler divergence of P and Q
N(x; µ, Σ) : Gaussian distribution over x with mean µ and covariance Σ
Functions
f : A → B : The function f with domain A and range B
f ∘ g : Composition of the functions f and g
f(x; θ) : A function of x parametrized by θ. (Sometimes we write f(x) and omit the argument θ to lighten notation.)
log x : Natural logarithm of x
σ(x) : Logistic sigmoid, 1 / (1 + exp(−x))
ζ(x) : Softplus, log(1 + exp(x))
||x||_p : L^p norm of x
||x|| : L^2 norm of x
x^+ : Positive part of x, i.e., max(0, x)
1_condition : is 1 if the condition is true, 0 otherwise

Sometimes we use a function f whose argument is a scalar but apply it to a vector, matrix, or tensor: f(x), f(X), or f(X). This denotes the application of f to the array element-wise. For example, if C = σ(X), then C_(i,j,k) = σ(X_(i,j,k)) for all valid values of i, j and k.

Datasets and Distributions
p_data : The data generating distribution
p̂_data : The empirical distribution defined by the training set
X : A set of training examples
x^(i) : The i-th example (input) from a dataset
y^(i) or y^(i) (scalar or vector) : The target associated with x^(i) for supervised learning
X : The m × n matrix with input example x^(i) in row X_(i,:)
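The function definitions above translate directly into code. The following sketch is not part of the original text; it is a minimal NumPy rendering of the sigmoid, softplus, L^p norm and positive-part functions, included only for readers who find executable definitions helpful.

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: 1 / (1 + exp(-x)).
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # Softplus zeta(x) = log(1 + exp(x)); logaddexp(0, x) avoids overflow for large x.
    return np.logaddexp(0.0, x)

def lp_norm(x, p=2):
    # L^p norm of a vector: (sum_i |x_i|^p)^(1/p).
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

def positive_part(x):
    # x^+ = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(sigmoid(x), softplus(x), lp_norm(x, p=2), positive_part(x))
```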
Chapter 1

Introduction

Inventors have long dreamed of creating machines that think. This desire dates back to at least the time of ancient Greece. The mythical figures Pygmalion, Daedalus, and Hephaestus may all be interpreted as legendary inventors, and Galatea, Talos, and Pandora may all be regarded as artificial life (Ovid and Martin, 2004; Sparkes, 1996; Tandy, 1997).

When programmable computers were first conceived, people wondered whether such machines might become intelligent, over a hundred years before one was built (Lovelace, 1842). Today, artificial intelligence (AI) is a thriving field with many practical applications and active research topics. We look to intelligent software to automate routine labor, understand speech or images, make diagnoses in medicine and support basic scientific research.

In the early days of artificial intelligence, the field rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers—problems that can be described by a list of formal, mathematical rules. The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images.

This book is about a solution to these more intuitive problems. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these
concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning.

Many of the early successes of AI took place in relatively sterile and formal environments and did not require computers to have much knowledge about the world. For example, IBM's Deep Blue chess-playing system defeated world champion Garry Kasparov in 1997 (Hsu, 2002). Chess is of course a very simple world, containing only sixty-four locations and thirty-two pieces that can move in only rigidly circumscribed ways. Devising a successful chess strategy is a tremendous accomplishment, but the challenge is not due to the difficulty of describing the set of chess pieces and allowable moves to the computer. Chess can be completely described by a very brief list of completely formal rules, easily provided ahead of time by the programmer.

Ironically, abstract and formal tasks that are among the most difficult mental undertakings for a human being are among the easiest for a computer. Computers have long been able to defeat even the best human chess player, but are only recently matching some of the abilities of average human beings to recognize objects or speech. A person's everyday life requires an immense amount of knowledge about the world. Much of this knowledge is subjective and intuitive, and therefore difficult to articulate in a formal way. Computers need to capture this same knowledge in order to behave in an intelligent way. One of the key challenges in artificial intelligence is how to get this informal knowledge into a computer.

Several artificial intelligence projects have sought to hard-code knowledge about the world in formal languages. A computer can reason about statements in these formal languages automatically using logical inference rules. This is known as the knowledge base approach to artificial intelligence. None of these projects has led to a major success. One of the most famous such projects is Cyc (Lenat and Guha, 1989). Cyc is an inference engine and a database of statements in a language called CycL. These statements are entered by a staff of human supervisors. It is an unwieldy process. People struggle to devise formal rules with enough complexity to accurately describe the world. For example, Cyc failed to understand a story about a person named Fred shaving in the morning (Linde, 1992). Its inference engine detected an inconsistency in the story: it knew that people do not have electrical parts, but because Fred was holding an electric razor, it believed the entity "FredWhileShaving" contained electrical parts. It therefore asked whether Fred was still a person while he was shaving.

The difficulties faced by systems relying on hard-coded knowledge suggest that AI systems need the ability to acquire their own knowledge, by extracting patterns from raw data. This capability is known as machine learning. The
introduction of machine learning allowed computers to tackle problems involving knowledge of the real world and make decisions that appear subjective. A simple machine learning algorithm called logistic regression can determine whether to recommend cesarean delivery (Mor-Yosef et al., 1990). A simple machine learning algorithm called naive Bayes can separate legitimate e-mail from spam e-mail.

The performance of these simple machine learning algorithms depends heavily on the representation of the data they are given. For example, when logistic regression is used to recommend cesarean delivery, the AI system does not examine the patient directly. Instead, the doctor tells the system several pieces of relevant information, such as the presence or absence of a uterine scar. Each piece of information included in the representation of the patient is known as a feature. Logistic regression learns how each of these features of the patient correlates with various outcomes. However, it cannot influence the way that the features are defined in any way. If logistic regression was given an MRI scan of the patient, rather than the doctor's formalized report, it would not be able to make useful predictions. Individual pixels in an MRI scan have negligible correlation with any complications that might occur during delivery.

This dependence on representations is a general phenomenon that appears throughout computer science and even daily life. In computer science, operations such as searching a collection of data can proceed exponentially faster if the collection is structured and indexed intelligently. People can easily perform arithmetic on Arabic numerals, but find arithmetic on Roman numerals much more time-consuming. It is not surprising that the choice of representation has an enormous effect on the performance of machine learning algorithms. For a simple visual example, see figure 1.1.

Many artificial intelligence tasks can be solved by designing the right set of features to extract for that task, then providing these features to a simple machine learning algorithm. For example, a useful feature for speaker identification from sound is an estimate of the size of the speaker's vocal tract. It therefore gives a strong clue as to whether the speaker is a man, woman, or child.

However, for many tasks, it is difficult to know what features should be extracted. For example, suppose that we would like to write a program to detect cars in photographs. We know that cars have wheels, so we might like to use the presence of a wheel as a feature. Unfortunately, it is difficult to describe exactly what a wheel looks like in terms of pixel values. A wheel has a simple geometric shape but its image may be complicated by shadows falling on the wheel, the sun glaring off the metal parts of the wheel, the fender of the car or an object in the foreground obscuring part of the wheel, and so on.
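To make the representation example of figure 1.1 (on the next page) concrete, here is a small sketch that is not from the original text: it builds a synthetic two-ring dataset in NumPy. In Cartesian coordinates no straight line separates the two classes, but after converting to polar coordinates a simple threshold on the radius alone classifies every point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an inner ring (class 0) and an outer ring (class 1).
n = 200
angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
radii = np.where(np.arange(n) < n // 2, 1.0, 3.0) + rng.normal(0.0, 0.1, size=n)
labels = (np.arange(n) >= n // 2).astype(int)
x, y = radii * np.cos(angles), radii * np.sin(angles)

# In Cartesian coordinates (x, y) no straight line separates the classes.
# In polar coordinates (r, theta) a threshold on r alone does the job.
r = np.sqrt(x ** 2 + y ** 2)
predictions = (r > 2.0).astype(int)
accuracy = np.mean(predictions == labels)
print(f"accuracy of the one-dimensional threshold on r: {accuracy:.2f}")
```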
Figure 1.1: Example of different representations: suppose we want to separate two categories of data by drawing a line between them in a scatterplot. In the plot on the left, we represent some data using Cartesian coordinates, and the task is impossible. In the plot on the right, we represent the data with polar coordinates and the task becomes simple to solve with a vertical line. Figure produced in collaboration with David Warde-Farley.

One solution to this problem is to use machine learning to discover not only the mapping from representation to output but also the representation itself. This approach is known as representation learning. Learned representations often result in much better performance than can be obtained with hand-designed representations. They also allow AI systems to rapidly adapt to new tasks, with minimal human intervention. A representation learning algorithm can discover a good set of features for a simple task in minutes, or a complex task in hours to months. Manually designing features for a complex task requires a great deal of human time and effort; it can take decades for an entire community of researchers.

The quintessential example of a representation learning algorithm is the autoencoder. An autoencoder is the combination of an encoder function that converts the input data into a different representation, and a decoder function that converts the new representation back into the original format. Autoencoders are trained to preserve as much information as possible when an input is run through the encoder and then the decoder, but are also trained to make the new representation have various nice properties. Different kinds of autoencoders aim to achieve different kinds of properties.

When designing features or algorithms for learning features, our goal is usually to separate the factors of variation that explain the observed data. In this context, we use the word "factors" simply to refer to separate sources of influence; the factors are usually not combined by multiplication. Such factors are often not
quantities that are directly observed. Instead, they may exist either as unobserved objects or unobserved forces in the physical world that affect observable quantities. They may also exist as constructs in the human mind that provide useful simplifying explanations or inferred causes of the observed data. They can be thought of as concepts or abstractions that help us make sense of the rich variability in the data. When analyzing a speech recording, the factors of variation include the speaker's age, their sex, their accent and the words that they are speaking. When analyzing an image of a car, the factors of variation include the position of the car, its color, and the angle and brightness of the sun.

A major source of difficulty in many real-world artificial intelligence applications is that many of the factors of variation influence every single piece of data we are able to observe. The individual pixels in an image of a red car might be very close to black at night. The shape of the car's silhouette depends on the viewing angle. Most applications require us to disentangle the factors of variation and discard the ones that we do not care about.

Of course, it can be very difficult to extract such high-level, abstract features from raw data. Many of these factors of variation, such as a speaker's accent, can be identified only using sophisticated, nearly human-level understanding of the data. When it is nearly as difficult to obtain a representation as to solve the original problem, representation learning does not, at first glance, seem to help us.

Deep learning solves this central problem in representation learning by introducing representations that are expressed in terms of other, simpler representations. Deep learning allows the computer to build complex concepts out of simpler concepts. Figure 1.2 shows how a deep learning system can represent the concept of an image of a person by combining simpler concepts, such as corners and contours, which are in turn defined in terms of edges.

The quintessential example of a deep learning model is the feedforward deep network or multilayer perceptron (MLP). A multilayer perceptron is just a mathematical function mapping some set of input values to output values. The function is formed by composing many simpler functions. We can think of each application of a different mathematical function as providing a new representation of the input.

The idea of learning the right representation for the data provides one perspective on deep learning. Another perspective on deep learning is that depth allows the computer to learn a multi-step computer program. Each layer of the representation can be thought of as the state of the computer's memory after executing another set of instructions in parallel. Networks with greater depth can execute more instructions in sequence. Sequential instructions offer great power because later
[Figure 1.2 diagram: visible layer (input pixels) → 1st hidden layer (edges) → 2nd hidden layer (corners and contours) → 3rd hidden layer (object parts) → output (object identity: CAR, PERSON, ANIMAL).]

Figure 1.2: Illustration of a deep learning model. It is difficult for a computer to understand the meaning of raw sensory input data, such as this image represented as a collection of pixel values. The function mapping from a set of pixels to an object identity is very complicated. Learning or evaluating this mapping seems insurmountable if tackled directly. Deep learning resolves this difficulty by breaking the desired complicated mapping into a series of nested simple mappings, each described by a different layer of the model. The input is presented at the visible layer, so named because it contains the variables that we are able to observe. Then a series of hidden layers extracts increasingly abstract features from the image. These layers are called "hidden" because their values are not given in the data; instead the model must determine which concepts are useful for explaining the relationships in the observed data. The images here are visualizations of the kind of feature represented by each hidden unit. Given the pixels, the first layer can easily identify edges, by comparing the brightness of neighboring pixels. Given the first hidden layer's description of the edges, the second hidden layer can easily search for corners and extended contours, which are recognizable as collections of edges. Given the second hidden layer's description of the image in terms of corners and contours, the third hidden layer can detect entire parts of specific objects, by finding specific collections of contours and corners. Finally, this description of the image in terms of the object parts it contains can be used to recognize the objects present in the image. Images reproduced with permission from Zeiler and Fergus (2014).
[Figure 1.3 diagram: two computational graphs for the same logistic regression output, one built from elementary ×, + and σ operations on x1, w1, x2, w2, and one treating logistic regression as a single element.]

Figure 1.3: Illustration of computational graphs mapping an input to an output where each node performs an operation. Depth is the length of the longest path from input to output but depends on the definition of what constitutes a possible computational step. The computation depicted in these graphs is the output of a logistic regression model, σ(w⊤x), where σ is the logistic sigmoid function. If we use addition, multiplication and logistic sigmoids as the elements of our computer language, then this model has depth three. If we view logistic regression as an element itself, then this model has depth one.

instructions can refer back to the results of earlier instructions. According to this view of deep learning, not all of the information in a layer's activations necessarily encodes factors of variation that explain the input. The representation also stores state information that helps to execute a program that can make sense of the input. This state information could be analogous to a counter or pointer in a traditional computer program. It has nothing to do with the content of the input specifically, but it helps the model to organize its processing.

There are two main ways of measuring the depth of a model. The first view is based on the number of sequential instructions that must be executed to evaluate the architecture. We can think of this as the length of the longest path through a flow chart that describes how to compute each of the model's outputs given its inputs. Just as two equivalent computer programs will have different lengths depending on which language the program is written in, the same function may be drawn as a flowchart with different depths depending on which functions we allow to be used as individual steps in the flowchart. Figure 1.3 illustrates how this choice of language can give two different measurements for the same architecture, and the sketch below walks through the same comparison in code.
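As a concrete companion to figure 1.3, the following sketch (ours, assuming NumPy, not code from the book) evaluates the same logistic regression output σ(w⊤x) two ways: as a single "logistic regression" element, giving depth one, and as a chain of elementary multiplication, addition and sigmoid steps, giving depth three.

```python
import numpy as np

w = np.array([0.5, -1.0])
x = np.array([2.0, 1.0])

# View 1: logistic regression as a single element of our "language" -> depth 1.
def logistic_regression(x, w):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

# View 2: the same function built from elementary steps -> depth 3
# (multiplications, then an addition, then a logistic sigmoid).
products = w * x                                  # step 1: element-wise multiplications
pre_activation = products.sum()                   # step 2: addition
output = 1.0 / (1.0 + np.exp(-pre_activation))    # step 3: logistic sigmoid

assert np.isclose(output, logistic_regression(x, w))
print(output)
```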
Another approach, used by deep probabilistic models, regards the depth of a model as being not the depth of the computational graph but the depth of the graph describing how concepts are related to each other. In this case, the depth of the flowchart of the computations needed to compute the representation of each concept may be much deeper than the graph of the concepts themselves. This is because the system's understanding of the simpler concepts can be refined given information about the more complex concepts. For example, an AI system observing an image of a face with one eye in shadow may initially only see one eye. After detecting that a face is present, it can then infer that a second eye is probably present as well. In this case, the graph of concepts only includes two layers—a layer for eyes and a layer for faces—but the graph of computations includes 2n layers if we refine our estimate of each concept given the other n times.

Because it is not always clear which of these two views—the depth of the computational graph, or the depth of the probabilistic modeling graph—is most relevant, and because different people choose different sets of smallest elements from which to construct their graphs, there is no single correct value for the depth of an architecture, just as there is no single correct value for the length of a computer program. Nor is there a consensus about how much depth a model requires to qualify as "deep." However, deep learning can safely be regarded as the study of models that either involve a greater amount of composition of learned functions or learned concepts than traditional machine learning does.

To summarize, deep learning, the subject of this book, is an approach to AI. Specifically, it is a type of machine learning, a technique that allows computer systems to improve with experience and data. According to the authors of this book, machine learning is the only viable approach to building AI systems that can operate in complicated, real-world environments. Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones. Figure 1.4 illustrates the relationship between these different AI disciplines. Figure 1.5 gives a high-level schematic of how each works.

1.1 Who Should Read This Book?

This book can be useful for a variety of readers, but we wrote it with two main target audiences in mind. One of these target audiences is university students (undergraduate or graduate) learning about machine learning, including those who are beginning a career in deep learning and artificial intelligence research. The other target audience is software engineers who do not have a machine learning or statistics background, but want to rapidly acquire one and begin using deep learning in their product or platform. Deep learning has already proven useful in
[Figure 1.4 Venn diagram: deep learning within representation learning, within machine learning, within AI; examples are MLPs (deep learning), shallow autoencoders (representation learning), logistic regression (machine learning) and knowledge bases (AI).]

Figure 1.4: A Venn diagram showing how deep learning is a kind of representation learning, which is in turn a kind of machine learning, which is used for many but not all approaches to AI. Each section of the Venn diagram includes an example of an AI technology.
[Figure 1.5 flowcharts: rule-based systems (input → hand-designed program → output); classic machine learning (input → hand-designed features → mapping from features → output); representation learning (input → features → mapping from features → output); deep learning (input → simple features → additional layers of more abstract features → mapping from features → output).]

Figure 1.5: Flowcharts showing how the different parts of an AI system relate to each other within different AI disciplines. Shaded boxes indicate components that are able to learn from data.
many software disciplines including computer vision, speech and audio processing, natural language processing, robotics, bioinformatics and chemistry, video games, search engines, online advertising and finance.

This book has been organized into three parts in order to best accommodate a variety of readers. Part I introduces basic mathematical tools and machine learning concepts. Part II describes the most established deep learning algorithms that are essentially solved technologies. Part III describes more speculative ideas that are widely believed to be important for future research in deep learning.

Readers should feel free to skip parts that are not relevant given their interests or background. Readers familiar with linear algebra, probability, and fundamental machine learning concepts can skip part I, for example, while readers who just want to implement a working system need not read beyond part II. To help choose which chapters to read, figure 1.6 provides a flowchart showing the high-level organization of the book.

We do assume that all readers come from a computer science background. We assume familiarity with programming, a basic understanding of computational performance issues, complexity theory, introductory level calculus and some of the terminology of graph theory.

1.2 Historical Trends in Deep Learning

It is easiest to understand deep learning with some historical context. Rather than providing a detailed history of deep learning, we identify a few key trends:

• Deep learning has had a long and rich history, but has gone by many names reflecting different philosophical viewpoints, and has waxed and waned in popularity.

• Deep learning has become more useful as the amount of available training data has increased.

• Deep learning models have grown in size over time as computer infrastructure (both hardware and software) for deep learning has improved.

• Deep learning has solved increasingly complicated applications with increasing accuracy over time.
[Figure 1.6 flowchart of the book: 1. Introduction; Part I, Applied Math and Machine Learning Basics: 2. Linear Algebra, 3. Probability and Information Theory, 4. Numerical Computation, 5. Machine Learning Basics; Part II, Deep Networks: Modern Practices: 6. Deep Feedforward Networks, 7. Regularization, 8. Optimization, 9. CNNs, 10. RNNs, 11. Practical Methodology, 12. Applications; Part III, Deep Learning Research: 13. Linear Factor Models, 14. Autoencoders, 15. Representation Learning, 16. Structured Probabilistic Models, 17. Monte Carlo Methods, 18. Partition Function, 19. Inference, 20. Deep Generative Models.]

Figure 1.6: The high-level organization of the book. An arrow from one chapter to another indicates that the former chapter is prerequisite material for understanding the latter.
1.2.1 The Many Names and Changing Fortunes of Neural Networks

We expect that many readers of this book have heard of deep learning as an exciting new technology, and are surprised to see a mention of "history" in a book about an emerging field. In fact, deep learning dates back to the 1940s. Deep learning only appears to be new, because it was relatively unpopular for several years preceding its current popularity, and because it has gone through many different names, and has only recently become called "deep learning." The field has been rebranded many times, reflecting the influence of different researchers and different perspectives.

A comprehensive history of deep learning is beyond the scope of this textbook. However, some basic context is useful for understanding deep learning. Broadly speaking, there have been three waves of development of deep learning: deep learning known as cybernetics in the 1940s–1960s, deep learning known as connectionism in the 1980s–1990s, and the current resurgence under the name deep learning beginning in 2006. This is quantitatively illustrated in figure 1.7.

Some of the earliest learning algorithms we recognize today were intended to be computational models of biological learning, i.e. models of how learning happens or could happen in the brain. As a result, one of the names that deep learning has gone by is artificial neural networks (ANNs). The corresponding perspective on deep learning models is that they are engineered systems inspired by the biological brain (whether the human brain or the brain of another animal). While the kinds of neural networks used for machine learning have sometimes been used to understand brain function (Hinton and Shallice, 1991), they are generally not designed to be realistic models of biological function. The neural perspective on deep learning is motivated by two main ideas. One idea is that the brain provides a proof by example that intelligent behavior is possible, and a conceptually straightforward path to building intelligence is to reverse engineer the computational principles behind the brain and duplicate its functionality. Another perspective is that it would be deeply interesting to understand the brain and the principles that underlie human intelligence, so machine learning models that shed light on these basic scientific questions are useful apart from their ability to solve engineering applications.

The modern term "deep learning" goes beyond the neuroscientific perspective on the current breed of machine learning models. It appeals to a more general principle of learning multiple levels of composition, which can be applied in machine learning frameworks that are not necessarily neurally inspired.
[Figure 1.7 plot: frequency of word or phrase in Google Books, 1940–2000, for "cybernetics" and for "connectionism" + "neural networks".]

Figure 1.7: The figure shows two of the three historical waves of artificial neural nets research, as measured by the frequency of the phrases "cybernetics" and "connectionism" or "neural networks" according to Google Books (the third wave is too recent to appear). The first wave started with cybernetics in the 1940s–1960s, with the development of theories of biological learning (McCulloch and Pitts, 1943; Hebb, 1949) and implementations of the first models such as the perceptron (Rosenblatt, 1958) allowing the training of a single neuron. The second wave started with the connectionist approach of the 1980–1995 period, with back-propagation (Rumelhart et al., 1986a) to train a neural network with one or two hidden layers. The current and third wave, deep learning, started around 2006 (Hinton et al., 2006; Bengio et al., 2007; Ranzato et al., 2007a), and is just now appearing in book form as of 2016. The other two waves similarly appeared in book form much later than the corresponding scientific activity occurred.
The earliest predecessors of modern deep learning were simple linear models motivated from a neuroscientific perspective. These models were designed to take a set of n input values x1, . . . , xn and associate them with an output y. These models would learn a set of weights w1, . . . , wn and compute their output f(x; w) = x1 w1 + · · · + xn wn. This first wave of neural networks research was known as cybernetics, as illustrated in figure 1.7.

The McCulloch-Pitts Neuron (McCulloch and Pitts, 1943) was an early model of brain function. This linear model could recognize two different categories of inputs by testing whether f(x; w) is positive or negative. Of course, for the model to correspond to the desired definition of the categories, the weights needed to be set correctly. These weights could be set by the human operator. In the 1950s, the perceptron (Rosenblatt, 1958, 1962) became the first model that could learn the weights defining the categories given examples of inputs from each category. The adaptive linear element (ADALINE), which dates from about the same time, simply returned the value of f(x) itself to predict a real number (Widrow and Hoff, 1960), and could also learn to predict these numbers from data.

These simple learning algorithms greatly affected the modern landscape of machine learning. The training algorithm used to adapt the weights of the ADALINE was a special case of an algorithm called stochastic gradient descent. Slightly modified versions of the stochastic gradient descent algorithm remain the dominant training algorithms for deep learning models today.

Models based on the f(x; w) used by the perceptron and ADALINE are called linear models. These models remain some of the most widely used machine learning models, though in many cases they are trained in different ways than the original models were trained.

Linear models have many limitations. Most famously, they cannot learn the XOR function, where f([0, 1], w) = 1 and f([1, 0], w) = 1 but f([1, 1], w) = 0 and f([0, 0], w) = 0, as the sketch below makes concrete. Critics who observed these flaws in linear models caused a backlash against biologically inspired learning in general (Minsky and Papert, 1969). This was the first major dip in the popularity of neural networks.
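The XOR limitation is easy to verify numerically. The sketch below is ours rather than the book's; it fits the linear model f(x; w) = w1 x1 + w2 x2 + b by least squares to the OR and XOR targets using NumPy. The OR fit can be thresholded correctly, while the best XOR fit is the constant 0.5, so no threshold works.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
X_bias = np.hstack([X, np.ones((4, 1))])  # append a bias feature

y_or = np.array([0.0, 1.0, 1.0, 1.0])
y_xor = np.array([0.0, 1.0, 1.0, 0.0])

for name, y in [("OR", y_or), ("XOR", y_xor)]:
    # Least-squares fit of the linear model f(x; w) = w1*x1 + w2*x2 + b.
    w, *_ = np.linalg.lstsq(X_bias, y, rcond=None)
    predictions = X_bias @ w
    print(name, np.round(predictions, 2))

# OR  -> roughly [0.25, 0.75, 0.75, 1.25]: thresholding at 0.5 classifies all four points.
# XOR -> [0.5, 0.5, 0.5, 0.5]: the best linear fit is constant, so no threshold works.
```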
Today, neuroscience is regarded as an important source of inspiration for deep learning researchers, but it is no longer the predominant guide for the field.

The main reason for the diminished role of neuroscience in deep learning research today is that we simply do not have enough information about the brain to use it as a guide. To obtain a deep understanding of the actual algorithms used by the brain, we would need to be able to monitor the activity of (at the very least) thousands of interconnected neurons simultaneously. Because we are not able to do this, we are far from understanding even some of the most simple and well-studied parts of the brain (Olshausen and Field, 2005).

Neuroscience has given us a reason to hope that a single deep learning algorithm can solve many different tasks. Neuroscientists have found that ferrets can learn to "see" with the auditory processing region of their brain if their brains are rewired to send visual signals to that area (Von Melchner et al., 2000). This suggests that much of the mammalian brain might use a single algorithm to solve most of the different tasks that the brain solves. Before this hypothesis, machine learning research was more fragmented, with different communities of researchers studying natural language processing, vision, motion planning and speech recognition. Today, these application communities are still separate, but it is common for deep learning research groups to study many or even all of these application areas simultaneously.

We are able to draw some rough guidelines from neuroscience. The basic idea of having many computational units that become intelligent only via their interactions with each other is inspired by the brain. The Neocognitron (Fukushima, 1980) introduced a powerful model architecture for processing images that was inspired by the structure of the mammalian visual system and later became the basis for the modern convolutional network (LeCun et al., 1998b), as we will see in section 9.10. Most neural networks today are based on a model neuron called the rectified linear unit. The original Cognitron (Fukushima, 1975) introduced a more complicated version that was highly inspired by our knowledge of brain function. The simplified modern version was developed incorporating ideas from many viewpoints, with Nair and Hinton (2010) and Glorot et al. (2011a) citing neuroscience as an influence, and Jarrett et al. (2009) citing more engineering-oriented influences. While neuroscience is an important source of inspiration, it need not be taken as a rigid guide. We know that actual neurons compute very different functions than modern rectified linear units, but greater neural realism has not yet led to an improvement in machine learning performance. Also, while neuroscience has successfully inspired several neural network architectures, we do not yet know enough about biological learning for neuroscience to offer much guidance for the learning algorithms we use to train these architectures.

Media accounts often emphasize the similarity of deep learning to the brain. While it is true that deep learning researchers are more likely to cite the brain as an influence than researchers working in other machine learning fields such as kernel machines or Bayesian statistics, one should not view deep learning as an attempt to simulate the brain. Modern deep learning draws inspiration from many fields, especially applied math fundamentals like linear algebra, probability, information theory, and numerical optimization. While some deep learning researchers cite neuroscience as an important source of inspiration, others are not concerned with
neuroscience at all.

It is worth noting that the effort to understand how the brain works on an algorithmic level is alive and well. This endeavor is primarily known as "computational neuroscience" and is a separate field of study from deep learning. It is common for researchers to move back and forth between both fields. The field of deep learning is primarily concerned with how to build computer systems that are able to successfully solve tasks requiring intelligence, while the field of computational neuroscience is primarily concerned with building more accurate models of how the brain actually works.

In the 1980s, the second wave of neural network research emerged in great part via a movement called connectionism or parallel distributed processing (Rumelhart et al., 1986c; McClelland et al., 1995). Connectionism arose in the context of cognitive science. Cognitive science is an interdisciplinary approach to understanding the mind, combining multiple different levels of analysis. During the early 1980s, most cognitive scientists studied models of symbolic reasoning. Despite their popularity, symbolic models were difficult to explain in terms of how the brain could actually implement them using neurons. The connectionists began to study models of cognition that could actually be grounded in neural implementations (Touretzky and Minton, 1985), reviving many ideas dating back to the work of psychologist Donald Hebb in the 1940s (Hebb, 1949).

The central idea in connectionism is that a large number of simple computational units can achieve intelligent behavior when networked together. This insight applies equally to neurons in biological nervous systems and to hidden units in computational models.

Several key concepts arose during the connectionism movement of the 1980s that remain central to today's deep learning. One of these concepts is that of distributed representation (Hinton et al., 1986). This is the idea that each input to a system should be represented by many features, and each feature should be involved in the representation of many possible inputs. For example, suppose we have a vision system that can recognize cars, trucks, and birds and these objects can each be red, green, or blue. One way of representing these inputs would be to have a separate neuron or hidden unit that activates for each of the nine possible combinations: red truck, red car, red bird, green truck, and so on. This requires nine different neurons, and each neuron must independently learn the concept of color and object identity. One way to improve on this situation is to use a distributed representation, with three neurons describing the color and three neurons describing the object identity. This requires only six neurons total instead of nine, and the neuron describing redness is able to learn about redness from images of cars, trucks and birds, not only from images of one specific category of objects. The sketch that follows spells out the two encodings.
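The nine-versus-six-unit comparison above can be written out in a few lines of plain Python. The sketch below is ours, not the book's; it enumerates the one-hot code over (color, object) pairs and the distributed code that uses separate color and object units.

```python
from itertools import product

colors = ["red", "green", "blue"]
objects = ["car", "truck", "bird"]

# One-hot over joint combinations: one unit per (color, object) pair -> 9 units.
joint_units = list(product(colors, objects))
print(len(joint_units), "units for the one-hot joint code:", joint_units[:3], "...")

# Distributed code: one group of units for color, another for object -> 3 + 3 = 6 units.
def distributed_code(color, obj):
    return [int(color == c) for c in colors] + [int(obj == o) for o in objects]

print(len(distributed_code("red", "car")), "units for the distributed code")
print("red car  ->", distributed_code("red", "car"))
print("red bird ->", distributed_code("red", "bird"))
# The "red" unit is active for every red object, so whatever it learns about
# redness is shared across cars, trucks and birds.
```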
The concept of distributed representation is central to this book, and will be described in greater detail in chapter 15.

Another major accomplishment of the connectionist movement was the successful use of back-propagation to train deep neural networks with internal representations and the popularization of the back-propagation algorithm (Rumelhart et al., 1986a; LeCun, 1987). This algorithm has waxed and waned in popularity but as of this writing is currently the dominant approach to training deep models.

During the 1990s, researchers made important advances in modeling sequences with neural networks. Hochreiter (1991) and Bengio et al. (1994) identified some of the fundamental mathematical difficulties in modeling long sequences, described in section 10.7. Hochreiter and Schmidhuber (1997) introduced the long short-term memory or LSTM network to resolve some of these difficulties. Today, the LSTM is widely used for many sequence modeling tasks, including many natural language processing tasks at Google.

The second wave of neural networks research lasted until the mid-1990s. Ventures based on neural networks and other AI technologies began to make unrealistically ambitious claims while seeking investments. When AI research did not fulfill these unreasonable expectations, investors were disappointed. Simultaneously, other fields of machine learning made advances. Kernel machines (Boser et al., 1992; Cortes and Vapnik, 1995; Schölkopf et al., 1999) and graphical models (Jordan, 1998) both achieved good results on many important tasks. These two factors led to a decline in the popularity of neural networks that lasted until 2007.

During this time, neural networks continued to obtain impressive performance on some tasks (LeCun et al., 1998b; Bengio et al., 2001). The Canadian Institute for Advanced Research (CIFAR) helped to keep neural networks research alive via its Neural Computation and Adaptive Perception (NCAP) research initiative. This program united machine learning research groups led by Geoffrey Hinton at University of Toronto, Yoshua Bengio at University of Montreal, and Yann LeCun at New York University. The CIFAR NCAP research initiative had a multi-disciplinary nature that also included neuroscientists and experts in human and computer vision.

At this point in time, deep networks were generally believed to be very difficult to train. We now know that algorithms that have existed since the 1980s work quite well, but this was not apparent circa 2006. The issue is perhaps simply that these algorithms were too computationally costly to allow much experimentation with the hardware available at the time.

The third wave of neural networks research began with a breakthrough in
2006. Geoffrey Hinton showed that a kind of neural network called a deep belief network could be efficiently trained using a strategy called greedy layer-wise pre-training (Hinton et al., 2006), which will be described in more detail in section 15.1. The other CIFAR-affiliated research groups quickly showed that the same strategy could be used to train many other kinds of deep networks (Bengio et al., 2007; Ranzato et al., 2007a) and systematically helped to improve generalization on test examples. This wave of neural networks research popularized the use of the term "deep learning" to emphasize that researchers were now able to train deeper neural networks than had been possible before, and to focus attention on the theoretical importance of depth (Bengio and LeCun, 2007; Delalleau and Bengio, 2011; Pascanu et al., 2014a; Montufar et al., 2014). At this time, deep neural networks outperformed competing AI systems based on other machine learning technologies as well as hand-designed functionality. This third wave of popularity of neural networks continues to the time of this writing, though the focus of deep learning research has changed dramatically within the time of this wave. The third wave began with a focus on new unsupervised learning techniques and the ability of deep models to generalize well from small datasets, but today there is more interest in much older supervised learning algorithms and the ability of deep models to leverage large labeled datasets.

1.2.2 Increasing Dataset Sizes

One may wonder why deep learning has only recently become recognized as a crucial technology though the first experiments with artificial neural networks were conducted in the 1950s. Deep learning has been successfully used in commercial applications since the 1990s, but was often regarded as being more of an art than a technology and something that only an expert could use, until recently. It is true that some skill is required to get good performance from a deep learning algorithm. Fortunately, the amount of skill required reduces as the amount of training data increases. The learning algorithms reaching human performance on complex tasks today are nearly identical to the learning algorithms that struggled to solve toy problems in the 1980s, though the models we train with these algorithms have undergone changes that simplify the training of very deep architectures. The most important new development is that today we can provide these algorithms with the resources they need to succeed. Figure 1.8 shows how the size of benchmark datasets has increased remarkably over time. This trend is driven by the increasing digitization of society. As more and more of our activities take place on computers, more and more of what we do is recorded. As our computers are increasingly networked together, it becomes easier to centralize these records and curate them
into a dataset appropriate for machine learning applications. The age of "Big Data" has made machine learning much easier because the key burden of statistical estimation—generalizing well to new data after observing only a small amount of data—has been considerably lightened. As of 2016, a rough rule of thumb is that a supervised deep learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category, and will match or exceed human performance when trained with a dataset containing at least 10 million labeled examples. Working successfully with datasets smaller than this is an important research area, focusing in particular on how we can take advantage of large quantities of unlabeled examples, with unsupervised or semi-supervised learning.

1.2.3 Increasing Model Sizes

Another key reason that neural networks are wildly successful today after enjoying comparatively little success since the 1980s is that we have the computational resources to run much larger models today. One of the main insights of connectionism is that animals become intelligent when many of their neurons work together. An individual neuron or small collection of neurons is not particularly useful.

Biological neurons are not especially densely connected. As seen in figure 1.10, our machine learning models have had a number of connections per neuron that was within an order of magnitude of even mammalian brains for decades.

In terms of the total number of neurons, neural networks have been astonishingly small until quite recently, as shown in figure 1.11. Since the introduction of hidden units, artificial neural networks have doubled in size roughly every 2.4 years. This growth is driven by faster computers with larger memory and by the availability of larger datasets. Larger networks are able to achieve higher accuracy on more complex tasks. This trend looks set to continue for decades. Unless new technologies allow faster scaling, artificial neural networks will not have the same number of neurons as the human brain until at least the 2050s. Biological neurons may represent more complicated functions than current artificial neurons, so biological neural networks may be even larger than this plot portrays. The arithmetic behind this projection is sketched below.

In retrospect, it is not particularly surprising that neural networks with fewer neurons than a leech were unable to solve sophisticated artificial intelligence problems. Even today's networks, which we consider quite large from a computational systems point of view, are smaller than the nervous system of even relatively primitive vertebrate animals like frogs.
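The projection that artificial neural networks will not match the human brain in neuron count until around the 2050s follows from simple arithmetic on the 2.4-year doubling time. The numbers in the sketch below are illustrative assumptions of ours (roughly 10^7 neurons for a large artificial network in 2016 and 10^11 neurons for the human brain), not figures taken from the book.

```python
import math

doubling_time_years = 2.4     # doubling time cited in the text
neurons_2016 = 1e7            # assumed size of a large artificial network in 2016
neurons_human_brain = 1e11    # rough number of neurons in the human brain

doublings_needed = math.log2(neurons_human_brain / neurons_2016)
years_needed = doublings_needed * doubling_time_years
print(f"{doublings_needed:.1f} doublings, roughly the year {2016 + years_needed:.0f}")
# About 13.3 doublings, i.e. somewhere around mid-century, which is of the same
# order as the book's "not ... until at least the 2050s" estimate.
```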
[Figure 1.8 plot: dataset size (number of examples, logarithmic scale) versus year, 1900–2015, for datasets including the criminals dataset, Iris, T vs. G vs. F, Rotated T vs. C, MNIST, Public SVHN, CIFAR-10, ImageNet, ImageNet10k, ILSVRC 2014, Sports-1M, the Canadian Hansard corpus and WMT.]

Figure 1.8: Dataset sizes have increased greatly over time. In the early 1900s, statisticians studied datasets using hundreds or thousands of manually compiled measurements (Garson, 1900; Gosset, 1908; Anderson, 1935; Fisher, 1936). In the 1950s through 1980s, the pioneers of biologically inspired machine learning often worked with small, synthetic datasets, such as low-resolution bitmaps of letters, that were designed to incur low computational cost and demonstrate that neural networks were able to learn specific kinds of functions (Widrow and Hoff, 1960; Rumelhart et al., 1986b). In the 1980s and 1990s, machine learning became more statistical in nature and began to leverage larger datasets containing tens of thousands of examples such as the MNIST dataset (shown in figure 1.9) of scans of handwritten numbers (LeCun et al., 1998b). In the first decade of the 2000s, more sophisticated datasets of this same size, such as the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) continued to be produced. Toward the end of that decade and throughout the first half of the 2010s, significantly larger datasets, containing hundreds of thousands to tens of millions of examples, completely changed what was possible with deep learning. These datasets included the public Street View House Numbers dataset (Netzer et al., 2011), various versions of the ImageNet dataset (Deng et al., 2009, 2010a; Russakovsky et al., 2014a), and the Sports-1M dataset (Karpathy et al., 2014). At the top of the graph, we see that datasets of translated sentences, such as IBM's dataset constructed from the Canadian Hansard (Brown et al., 1990) and the WMT 2014 English to French dataset (Schwenk, 2014) are typically far ahead of other dataset sizes.
Figure 1.9: Example inputs from the MNIST dataset. The "NIST" stands for National Institute of Standards and Technology, the agency that originally collected this data. The "M" stands for "modified," since the data has been preprocessed for easier use with machine learning algorithms. The MNIST dataset consists of scans of handwritten digits and associated labels describing which digit 0–9 is contained in each image. This simple classification problem is one of the simplest and most widely used tests in deep learning research. It remains popular despite being quite easy for modern techniques to solve. Geoffrey Hinton has described it as "the drosophila of machine learning," meaning that it allows machine learning researchers to study their algorithms in controlled laboratory conditions, much as biologists often study fruit flies.
The increase in model size over time, due to the availability of faster CPUs, the advent of general purpose GPUs (described in section 12.1.2), faster network connectivity and better software infrastructure for distributed computing, is one of the most important trends in the history of deep learning. This trend is generally expected to continue well into the future.

1.2.4 Increasing Accuracy, Complexity and Real-World Impact

Since the 1980s, deep learning has consistently improved in its ability to provide accurate recognition or prediction. Moreover, deep learning has consistently been applied with success to broader and broader sets of applications.

The earliest deep models were used to recognize individual objects in tightly cropped, extremely small images (Rumelhart et al., 1986a). Since then there has been a gradual increase in the size of images neural networks could process. Modern object recognition networks process rich high-resolution photographs and do not have a requirement that the photo be cropped near the object to be recognized (Krizhevsky et al., 2012). Similarly, the earliest networks could only recognize two kinds of objects (or in some cases, the absence or presence of a single kind of object), while these modern networks typically recognize at least 1,000 different categories of objects. The largest contest in object recognition is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) held each year. A dramatic moment in the meteoric rise of deep learning came when a convolutional network won this challenge for the first time and by a wide margin, bringing down the state-of-the-art top-5 error rate from 26.1% to 15.3% (Krizhevsky et al., 2012), meaning that the convolutional network produces a ranked list of possible categories for each image and the correct category appeared in the first five entries of this list for all but 15.3% of the test examples. Since then, these competitions are consistently won by deep convolutional nets, and as of this writing, advances in deep learning have brought the latest top-5 error rate in this contest down to 3.6%, as shown in figure 1.12.

Deep learning has also had a dramatic impact on speech recognition. After improving throughout the 1990s, the error rates for speech recognition stagnated starting in about 2000. The introduction of deep learning (Dahl et al., 2010; Deng et al., 2010b; Seide et al., 2011; Hinton et al., 2012a) to speech recognition resulted in a sudden drop of error rates, with some error rates cut in half. We will explore this history in more detail in section 12.3.

Deep networks have also had spectacular successes for pedestrian detection and image segmentation (Sermanet et al., 2013; Farabet et al., 2013; Couprie et al., 2013) and yielded superhuman performance in traffic sign classification (Ciresan et al., 2012).
[Figure 1.10 plot: connections per neuron (logarithmic scale) versus year, 1950–2015, for the numbered networks below, with reference levels for fruit fly, mouse, cat and human.]

Figure 1.10: Initially, the number of connections between neurons in artificial neural networks was limited by hardware capabilities. Today, the number of connections between neurons is mostly a design consideration. Some artificial neural networks have nearly as many connections per neuron as a cat, and it is quite common for other neural networks to have as many connections per neuron as smaller mammals like mice. Even the human brain does not have an exorbitant amount of connections per neuron. Biological neural network sizes from Wikipedia (2015).

1. Adaptive linear element (Widrow and Hoff, 1960)
2. Neocognitron (Fukushima, 1980)
3. GPU-accelerated convolutional network (Chellapilla et al., 2006)
4. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
5. Unsupervised convolutional network (Jarrett et al., 2009)
6. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
7. Distributed autoencoder (Le et al., 2012)
8. Multi-GPU convolutional network (Krizhevsky et al., 2012)
9. COTS HPC unsupervised convolutional network (Coates et al., 2013)
10. GoogLeNet (Szegedy et al., 2014a)
At the same time that the scale and accuracy of deep networks has increased, so has the complexity of the tasks that they can solve. Goodfellow et al. (2014d) showed that neural networks could learn to output an entire sequence of characters transcribed from an image, rather than just identifying a single object. Previously, it was widely believed that this kind of learning required labeling of the individual elements of the sequence (Gülçehre and Bengio, 2013). Recurrent neural networks, such as the LSTM sequence model mentioned above, are now used to model relationships between sequences and other sequences rather than just fixed inputs. This sequence-to-sequence learning seems to be on the cusp of revolutionizing another application: machine translation (Sutskever et al., 2014; Bahdanau et al., 2015).

This trend of increasing complexity has been pushed to its logical conclusion with the introduction of neural Turing machines (Graves et al., 2014a) that learn to read from memory cells and write arbitrary content to memory cells. Such neural networks can learn simple programs from examples of desired behavior. For example, they can learn to sort lists of numbers given examples of scrambled and sorted sequences. This self-programming technology is in its infancy, but in the future could in principle be applied to nearly any task.

Another crowning achievement of deep learning is its extension to the domain of reinforcement learning. In the context of reinforcement learning, an autonomous agent must learn to perform a task by trial and error, without any guidance from the human operator. DeepMind demonstrated that a reinforcement learning system based on deep learning is capable of learning to play Atari video games, reaching human-level performance on many tasks (Mnih et al., 2015). Deep learning has also significantly improved the performance of reinforcement learning for robotics (Finn et al., 2015).

Many of these applications of deep learning are highly profitable. Deep learning is now used by many top technology companies including Google, Microsoft, Facebook, IBM, Baidu, Apple, Adobe, Netflix, NVIDIA and NEC.

Advances in deep learning have also depended heavily on advances in software infrastructure. Software libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012), PyLearn2 (Goodfellow et al., 2013c), Torch (Collobert et al., 2011b), DistBelief (Dean et al., 2012), Caffe (Jia, 2013), MXNet (Chen et al., 2015), and TensorFlow (Abadi et al., 2015) have all supported important research projects or commercial products.

Deep learning has also made contributions back to other sciences. Modern convolutional networks for object recognition provide a model of visual processing
  • 42. CHAPTER 1. INTRODUCTION that neuroscientists can study (DiCarlo, 2013). Deep learning also provides useful tools for processing massive amounts of data and making useful predictions in scientific fields. It has been successfully used to predict how molecules will interact in order to help pharmaceutical companies design new drugs (Dahl et al., 2014), to search for subatomic particles (Baldi et al., 2014), and to automatically parse microscope images used to construct a 3-D map of the human brain (Knowles-Barley et al., 2014). We expect deep learning to appear in more and more scientific fields in the future.
In summary, deep learning is an approach to machine learning that has drawn heavily on our knowledge of the human brain, statistics and applied math as it developed over the past several decades. In recent years, it has seen tremendous growth in its popularity and usefulness, due in large part to more powerful computers, larger datasets and techniques to train deeper networks. The years ahead are full of challenges and opportunities to improve deep learning even further and bring it to new frontiers. 26
  • 43. CHAPTER 1. INTRODUCTION
[Figure 1.11 plots the number of neurons (logarithmic scale, roughly 10^−2 to 10^11) for the years 1950–2015 with an extrapolation to 2056, with reference levels for the sponge, roundworm, leech, ant, bee, frog, octopus and human, and numbered points 1–20 corresponding to the models listed below.]
Figure 1.11: Since the introduction of hidden units, artificial neural networks have doubled in size roughly every 2.4 years. Biological neural network sizes from Wikipedia (2015).
1. Perceptron (Rosenblatt, 1958, 1962)
2. Adaptive linear element (Widrow and Hoff, 1960)
3. Neocognitron (Fukushima, 1980)
4. Early back-propagation network (Rumelhart et al., 1986b)
5. Recurrent neural network for speech recognition (Robinson and Fallside, 1991)
6. Multilayer perceptron for speech recognition (Bengio et al., 1991)
7. Mean field sigmoid belief network (Saul et al., 1996)
8. LeNet-5 (LeCun et al., 1998b)
9. Echo state network (Jaeger and Haas, 2004)
10. Deep belief network (Hinton et al., 2006)
11. GPU-accelerated convolutional network (Chellapilla et al., 2006)
12. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
13. GPU-accelerated deep belief network (Raina et al., 2009)
14. Unsupervised convolutional network (Jarrett et al., 2009)
15. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
16. OMP-1 network (Coates and Ng, 2011)
17. Distributed autoencoder (Le et al., 2012)
18. Multi-GPU convolutional network (Krizhevsky et al., 2012)
19. COTS HPC unsupervised convolutional network (Coates et al., 2013)
20. GoogLeNet (Szegedy et al., 2014a)
27
  • 44. CHAPTER 1. INTRODUCTION
[Figure 1.12 plots the ILSVRC classification error rate (vertical axis from 0.00 to 0.30) for the years 2010–2015.]
Figure 1.12: Since deep networks reached the scale necessary to compete in the ImageNet Large Scale Visual Recognition Challenge, they have consistently won the competition every year, and yielded lower and lower error rates each time. Data from Russakovsky et al. (2014b) and He et al. (2015). 28
  • 45. Part I Applied Math and Machine Learning Basics 29
  • 46. This part of the book introduces the basic mathematical concepts needed to understand deep learning. We begin with general ideas from applied math that allow us to define functions of many variables, find the highest and lowest points on these functions and quantify degrees of belief. Next, we describe the fundamental goals of machine learning. We describe how to accomplish these goals by specifying a model that represents certain beliefs, designing a cost function that measures how well those beliefs correspond with reality and using a training algorithm to minimize that cost function. This elementary framework is the basis for a broad variety of machine learning algorithms, including approaches to machine learning that are not deep. In the subsequent parts of the book, we develop deep learning algorithms within this framework. 30
  • 47. Chapter 2 Linear Algebra Linear algebra is a branch of mathematics that is widely used throughout science and engineering. However, because linear algebra is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. A good understanding of linear algebra is essential for understanding and working with many machine learning algorithms, especially deep learning algorithms. We therefore precede our introduction to deep learning with a focused presentation of the key linear algebra prerequisites. If you are already familiar with linear algebra, feel free to skip this chapter. If you have previous experience with these concepts but need a detailed reference sheet to review key formulas, we recommend The Matrix Cookbook (Petersen and Pedersen 2006 , ). If you have no exposure at all to linear algebra, this chapter will teach you enough to read this book, but we highly recommend that you also consult another resource focused exclusively on teaching linear algebra, such as Shilov 1977 ( ). This chapter will completely omit many important linear algebra topics that are not essential for understanding deep learning. 2.1 Scalars, Vectors, Matrices and Tensors The study of linear algebra involves several types of mathematical objects: • Scalars: A scalar is just a single number, in contrast to most of the other objects studied in linear algebra, which are usually arrays of multiple numbers. We write scalars in italics. We usually give scalars lower-case variable names. When we introduce them, we specify what kind of number they are. For 31
  • 48. CHAPTER 2. LINEAR ALGEBRA example, we might say “Let s ∈ R be the slope of the line,” while defining a real-valued scalar, or “Let n ∈ N be the number of units,” while defining a natural number scalar. • Vectors: A vector is an array of numbers. The numbers are arranged in order. We can identify each individual number by its index in that ordering. Typically we give vectors lower case names written in bold typeface, such as x. The elements of the vector are identified by writing its name in italic typeface, with a subscript. The first element of x is x1, the second element is x2 and so on. We also need to say what kind of numbers are stored in the vector. If each element is in R, and the vector has n elements, then the vector lies in the set formed by taking the Cartesian product of R n times, denoted as Rn. When we need to explicitly identify the elements of a vector, we write them as a column enclosed in square brackets: x =      x1 x2 . . . xn      . (2.1) We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis. Sometimes we need to index a set of elements of a vector. In this case, we define a set containing the indices and write the set as a subscript. For example, to access x1, x3 and x6, we define the set S = {1, 3,6} and write xS. We use the − sign to index the complement of a set. For example x−1 is the vector containing all elements of x except for x1, and x−S is the vector containing all of the elements of except for x x1, x3 and x6. • Matrices: A matrix is a 2-D array of numbers, so each element is identified by two indices instead of just one. We usually give matrices upper-case variable names with bold typeface, such as A. If a real-valued matrix A has a height of m and a width of n, then we say that A ∈ Rm n × . We usually identify the elements of a matrix using its name in italic but not bold font, and the indices are listed with separating commas. For example, A1 1 , is the upper left entry of A and Am,n is the bottom right entry. We can identify all of the numbers with vertical coordinate i by writing a “ ” for the horizontal : coordinate. For example, Ai,: denotes the horizontal cross section of A with vertical coordinate i. This is known as the i-th row of A. Likewise, A:,i is 32
  • 49. CHAPTER 2. LINEAR ALGEBRA A =   A1 1 , A1 2 , A2 1 , A2 2 , A3 1 , A3 2 ,   ⇒ A =  A1 1 , A2 1 , A3 1 , A1 2 , A2 2 , A3 2 ,  Figure 2.1: The transpose of the matrix can be thought of as a mirror image across the main diagonal. the -th of . When we need to explicitly identify the elements of i column A a matrix, we write them as an array enclosed in square brackets:  A1 1 , A1 2 , A2 1 , A2 2 ,  . (2.2) Sometimes we may need to index matrix-valued expressions that are not just a single letter. In this case, we use subscripts after the expression, but do not convert anything to lower case. For example, f(A)i,j gives element (i, j) of the matrix computed by applying the function to . f A • Tensors: In some cases we will need an array with more than two axes. In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor. We denote a tensor named “A” with this typeface: A. We identify the element of A at coordinates (i, j, k) by writing Ai,j,k. One important operation on matrices is the transpose. The transpose of a matrix is the mirror image of the matrix across a diagonal line, called the main diagonal, running down and to the right, starting from its upper left corner. See figure for a graphical depiction of this operation. We denote the transpose of a 2.1 matrix as A A, and it is defined such that (A )i,j = Aj,i. (2.3) Vectors can be thought of as matrices that contain only one column. The transpose of a vector is therefore a matrix with only one row. Sometimes we 33
  • 50. CHAPTER 2. LINEAR ALGEBRA define a vector by writing out its elements in the text inline as a row matrix, then using the transpose operator to turn it into a standard column vector, e.g., x = [x1, x2, x3] . A scalar can be thought of as a matrix with only a single entry. From this, we can see that a scalar is its own transpose: a a = . We can add matrices to each other, as long as they have the same shape, just by adding their corresponding elements: where C A B = + Ci,j = Ai,j + Bi,j. We can also add a scalar to a matrix or multiply a matrix by a scalar, just by performing that operation on each element of a matrix: D = a · B + c where Di,j = a B · i,j + c. In the context of deep learning, we also use some less conventional notation. We allow the addition of matrix and a vector, yielding another matrix: C = A +b, where Ci,j = Ai,j + bj. In other words, the vector b is added to each row of the matrix. This shorthand eliminates the need to define a matrix with b copied into each row before doing the addition. This implicit copying of b to many locations is called . broadcasting 2.2 Multiplying Matrices and Vectors One of the most important operations involving matrices is multiplication of two matrices. The matrix product of matrices A and B is a third matrix C. In order for this product to be defined, A must have the same number of columns as B has rows. If A is of shape m n × and B is of shape n p × , then C is of shape m p × . We can write the matrix product just by placing two or more matrices together, e.g. C AB = . (2.4) The product operation is defined by Ci,j =  k Ai,kBk,j. (2.5) Note that the standard product of two matrices is just a matrix containing not the product of the individual elements. Such an operation exists and is called the element-wise product Hadamard product or , and is denoted as . A B  The dot product between two vectors x and y of the same dimensionality is the matrix product xy. We can think of the matrix product C = AB as computing Ci,j as the dot product between row of and column of . i A j B 34
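The distinction between the matrix product, the element-wise (Hadamard) product and the dot product is easy to check numerically. The following is a minimal NumPy sketch; NumPy is not referenced by the text, and the array values are arbitrary illustrations only.

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])      # shape (3, 2)
B = np.array([[1., 0., 2.],
              [0., 1., 3.]])  # shape (2, 3)

# Matrix product: shape (3, 3), with C[i, j] = sum_k A[i, k] * B[k, j]
C = A @ B

# Hadamard (element-wise) product: requires operands of the same shape
H = A * A                     # shape (3, 2)

x = np.array([1., 2., 3.])
y = np.array([4., 5., 6.])
dot = x @ y                   # dot product x^T y, a scalar; equal to y @ x
```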
  • 51. CHAPTER 2. LINEAR ALGEBRA Matrix product operations have many useful properties that make mathematical analysis of matrices more convenient. For example, matrix multiplication is distributive: A B C AB AC ( + ) = + . (2.6) It is also associative: A BC AB C ( ) = ( ) . (2.7) Matrix multiplication is commutative (the condition not AB = BA does not always hold), unlike scalar multiplication. However, the dot product between two vectors is commutative: x y y =  x. (2.8) The transpose of a matrix product has a simple form: ( ) AB  = B A . (2.9) This allows us to demonstrate equation , by exploiting the fact that the value 2.8 of such a product is a scalar and therefore equal to its own transpose: x y =  x y  = y x. (2.10) Since the focus of this textbook is not linear algebra, we do not attempt to develop a comprehensive list of useful properties of the matrix product here, but the reader should be aware that many more exist. We now know enough linear algebra notation to write down a system of linear equations: Ax b = (2.11) where A ∈ Rm n × is a known matrix, b ∈ Rm is a known vector, and x ∈ Rn is a vector of unknown variables we would like to solve for. Each element xi of x is one of these unknown variables. Each row of A and each element of b provide another constraint. We can rewrite equation as: 2.11 A1 : , x = b1 (2.12) A2 : , x = b2 (2.13) . . . (2.14) Am,:x = bm (2.15) or, even more explicitly, as: A1 1 , x1 + A1 2 , x2 + + · · · A1,nxn = b1 (2.16) 35
  • 52. CHAPTER 2. LINEAR ALGEBRA
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
Figure 2.2: Example identity matrix: This is I3.
A2,1 x1 + A2,2 x2 + · · · + A2,n xn = b2 (2.17)
. . . (2.18)
Am,1 x1 + Am,2 x2 + · · · + Am,n xn = bm. (2.19)
Matrix-vector product notation provides a more compact representation for equations of this form.
2.3 Identity and Inverse Matrices
Linear algebra offers a powerful tool called matrix inversion that allows us to analytically solve equation 2.11 for many values of A.
To describe matrix inversion, we first need to define the concept of an identity matrix. An identity matrix is a matrix that does not change any vector when we multiply that vector by that matrix. We denote the identity matrix that preserves n-dimensional vectors as In. Formally, In ∈ R^{n×n}, and
∀x ∈ R^n, In x = x. (2.20)
The structure of the identity matrix is simple: all of the entries along the main diagonal are 1, while all of the other entries are zero. See figure 2.2 for an example.
The matrix inverse of A is denoted as A^{-1}, and it is defined as the matrix such that
A^{-1} A = In. (2.21)
We can now solve equation 2.11 by the following steps:
Ax = b (2.22)
A^{-1} Ax = A^{-1} b (2.23)
In x = A^{-1} b (2.24)
36
  • 53. CHAPTER 2. LINEAR ALGEBRA x A = −1 b. (2.25) Of course, this process depends on it being possible to find A−1 . We discuss the conditions for the existence of A−1 in the following section. When A−1 exists, several different algorithms exist for finding it in closed form. In theory, the same inverse matrix can then be used to solve the equation many times for different values of b. However, A −1 is primarily useful as a theoretical tool, and should not actually be used in practice for most software applications. Because A−1 can be represented with only limited precision on a digital computer, algorithms that make use of the value of b can usually obtain more accurate estimates of . x 2.4 Linear Dependence and Span In order for A−1 to exist, equation must have exactly one solution for every 2.11 value of b. However, it is also possible for the system of equations to have no solutions or infinitely many solutions for some values of b. It is not possible to have more than one but less than infinitely many solutions for a particular b; if both and are solutions then x y z x y = α + (1 ) − α (2.26) is also a solution for any real . α To analyze how many solutions the equation has, we can think of the columns of A as specifying different directions we can travel from the origin (the point specified by the vector of all zeros), and determine how many ways there are of reaching b. In this view, each element of x specifies how far we should travel in each of these directions, with xi specifying how far to move in the direction of column : i Ax =  i xiA:,i. (2.27) In general, this kind of operation is called a linear combination. Formally, a linear combination of some set of vectors {v(1) , . . . , v( ) n } is given by multiplying each vector v( ) i by a corresponding scalar coefficient and adding the results:  i civ( ) i . (2.28) The span of a set of vectors is the set of all points obtainable by linear combination of the original vectors. 37
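To connect the inversion discussion above with the column-space view, the sketch below (NumPy, illustrative values only) solves Ax = b without forming A^{-1} explicitly and checks that Ax is exactly the linear combination of the columns of A weighted by the entries of x, as in equation 2.27.

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])      # square, invertible matrix (assumed for illustration)
b = np.array([5., 10.])

# Preferred in practice: solve Ax = b directly rather than computing inv(A) @ b,
# which is typically less accurate numerically.
x = np.linalg.solve(A, b)

# Ax is the linear combination of the columns of A with coefficients x_i (equation 2.27).
combo = x[0] * A[:, 0] + x[1] * A[:, 1]
assert np.allclose(A @ x, combo)
assert np.allclose(A @ x, b)
```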
  • 54. CHAPTER 2. LINEAR ALGEBRA Determining whether Ax = b has a solution thus amounts to testing whether b is in the span of the columns of A. This particular span is known as the column space range or the of . A In order for the system Ax = b to have a solution for all values of b ∈ Rm, we therefore require that the column space of A be all of Rm. If any point in R m is excluded from the column space, that point is a potential value of b that has no solution. The requirement that the column space of A be all of Rm implies immediately that A must have at least m columns, i.e., n m ≥ . Otherwise, the dimensionality of the column space would be less than m. For example, consider a 3 × 2 matrix. The target b is 3-D, but x is only 2-D, so modifying the value of x at best allows us to trace out a 2-D plane within R3 . The equation has a solution if and only if lies on that plane. b Having n m ≥ is only a necessary condition for every point to have a solution. It is not a sufficient condition, because it is possible for some of the columns to be redundant. Consider a 2 ×2 matrix where both of the columns are identical. This has the same column space as a 2 × 1 matrix containing only one copy of the replicated column. In other words, the column space is still just a line, and fails to encompass all of R2 , even though there are two columns. Formally, this kind of redundancy is known as linear dependence. A set of vectors is linearly independent if no vector in the set is a linear combination of the other vectors. If we add a vector to a set that is a linear combination of the other vectors in the set, the new vector does not add any points to the set’s span. This means that for the column space of the matrix to encompass all of Rm , the matrix must contain at least one set of m linearly independent columns. This condition is both necessary and sufficient for equation to have a solution for 2.11 every value of b. Note that the requirement is for a set to have exactly m linear independent columns, not at least m. No set of m-dimensional vectors can have more than m mutually linearly independent columns, but a matrix with more than m columns may have more than one such set. In order for the matrix to have an inverse, we additionally need to ensure that equation has one solution for each value of 2.11 at most b. To do so, we need to ensure that the matrix has at most m columns. Otherwise there is more than one way of parametrizing each solution. Together, this means that the matrix must be square, that is, we require that m = n and that all of the columns must be linearly independent. A square matrix with linearly dependent columns is known as . singular If A is not square or is square but singular, it can still be possible to solve the equation. However, we can not use the method of matrix inversion to find the 38
  • 55. CHAPTER 2. LINEAR ALGEBRA solution. So far we have discussed matrix inverses as being multiplied on the left. It is also possible to define an inverse that is multiplied on the right: AA−1 = I. (2.29) For square matrices, the left inverse and right inverse are equal. 2.5 Norms Sometimes we need to measure the size of a vector. In machine learning, we usually measure the size of vectors using a function called a norm. Formally, the Lp norm is given by || || x p =   i |xi|p 1 p (2.30) for p , p . ∈ R ≥ 1 Norms, including the Lp norm, are functions mapping vectors to non-negative values. On an intuitive level, the norm of a vector x measures the distance from the origin to the point x. More rigorously, a norm is any function f that satisfies the following properties: • ⇒ f( ) = 0 x x = 0 • ≤ f( + ) x y f f ( ) + x ( ) y (the triangle inequality) • ∀ ∈ | | α R, f α ( x) = α f( ) x The L2 norm, with p = 2, is known as the Euclidean norm. It is simply the Euclidean distance from the origin to the point identified by x. The L2 norm is used so frequently in machine learning that it is often denoted simply as || || x , with the subscript omitted. It is also common to measure the size of a vector using 2 the squared L2 norm, which can be calculated simply as xx. The squared L2 norm is more convenient to work with mathematically and computationally than the L2 norm itself. For example, the derivatives of the squared L2 norm with respect to each element of x each depend only on the corresponding element of x, while all of the derivatives of the L2 norm depend on the entire vector. In many contexts, the squared L2 norm may be undesirable because it increases very slowly near the origin. In several machine learning 39
  • 56. CHAPTER 2. LINEAR ALGEBRA applications, it is important to discriminate between elements that are exactly zero and elements that are small but nonzero. In these cases, we turn to a function that grows at the same rate in all locations, but retains mathematical simplicity: the L1 norm. The L1 norm may be simplified to || || x 1 =  i |xi |. (2.31) The L1 norm is commonly used in machine learning when the difference between zero and nonzero elements is very important. Every time an element of x moves away from 0 by , the  L1 norm increases by .  We sometimes measure the size of the vector by counting its number of nonzero elements. Some authors refer to this function as the “L0 norm,” but this is incorrect terminology. The number of non-zero entries in a vector is not a norm, because scaling the vector by α does not change the number of nonzero entries. The L1 norm is often used as a substitute for the number of nonzero entries. One other norm that commonly arises in machine learning is the L∞ norm, also known as the max norm. This norm simplifies to the absolute value of the element with the largest magnitude in the vector, || || x ∞ = max i |xi|. (2.32) Sometimes we may also wish to measure the size of a matrix. In the context of deep learning, the most common way to do this is with the otherwise obscure Frobenius norm: || || A F =  i,j A2 i,j, (2.33) which is analogous to the L2 norm of a vector. The dot product of two vectors can be rewritten in terms of norms. Specifically, x y x = || ||2|| || y 2 cos θ (2.34) where is the angle between and . θ x y 2.6 Special Kinds of Matrices and Vectors Some special kinds of matrices and vectors are particularly useful. Diagonal matrices consist mostly of zeros and have non-zero entries only along the main diagonal. Formally, a matrix D is diagonal if and only if Di,j = 0 for 40
  • 57. CHAPTER 2. LINEAR ALGEBRA all i = j. We have already seen one example of a diagonal matrix: the identity matrix, where all of the diagonal entries are 1. We write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v. Diagonal matrices are of interest in part because multiplying by a diagonal matrix is very computationally efficient. To compute diag(v)x, we only need to scale each element xi by vi. In other words, diag(v)x = v x  . Inverting a square diagonal matrix is also efficient. The inverse exists only if every diagonal entry is nonzero, and in that case, diag(v)−1 = diag([1/v1, . . . ,1/vn ]). In many cases, we may derive some very general machine learning algorithm in terms of arbitrary matrices, but obtain a less expensive (and less descriptive) algorithm by restricting some matrices to be diagonal. Not all diagonal matrices need be square. It is possible to construct a rectangular diagonal matrix. Non-square diagonal matrices do not have inverses but it is still possible to multiply by them cheaply. For a non-square diagonal matrix D, the product Dx will involve scaling each element of x, and either concatenating some zeros to the result if D is taller than it is wide, or discarding some of the last elements of the vector if is wider than it is tall. D A matrix is any matrix that is equal to its own transpose: symmetric A A =  . (2.35) Symmetric matrices often arise when the entries are generated by some function of two arguments that does not depend on the order of the arguments. For example, if A is a matrix of distance measurements, with Ai,j giving the distance from point i to point , then j Ai,j = Aj,i because distance functions are symmetric. A is a vector with : unit vector unit norm || || x 2 = 1. (2.36) A vector x and a vector y are orthogonal to each other if x y = 0. If both vectors have nonzero norm, this means that they are at a 90 degree angle to each other. In Rn , at most n vectors may be mutually orthogonal with nonzero norm. If the vectors are not only orthogonal but also have unit norm, we call them orthonormal. An orthogonal matrix is a square matrix whose rows are mutually orthonor- mal and whose columns are mutually orthonormal: A A AA =  = I. (2.37) 41
  • 58. CHAPTER 2. LINEAR ALGEBRA This implies that A−1 = A , (2.38) so orthogonal matrices are of interest because their inverse is very cheap to compute. Pay careful attention to the definition of orthogonal matrices. Counterintuitively, their rows are not merely orthogonal but fully orthonormal. There is no special term for a matrix whose rows or columns are orthogonal but not orthonormal. 2.7 Eigendecomposition Many mathematical objects can be understood better by breaking them into constituent parts, or finding some properties of them that are universal, not caused by the way we choose to represent them. For example, integers can be decomposed into prime factors. The way we represent the number will change depending on whether we write it in base ten 12 or in binary, but it will always be true that 12 = 2× 2×3. From this representation we can conclude useful properties, such as that is not divisible by , or that any 12 5 integer multiple of will be divisible by . 12 3 Much as we can discover something about the true nature of an integer by decomposing it into prime factors, we can also decompose matrices in ways that show us information about their functional properties that is not obvious from the representation of the matrix as an array of elements. One of the most widely used kinds of matrix decomposition is called eigen- decomposition, in which we decompose a matrix into a set of eigenvectors and eigenvalues. An eigenvector of a square matrix A is a non-zero vector v such that multi- plication by alters only the scale of : A v Av v = λ . (2.39) The scalar λ is known as the eigenvalue corresponding to this eigenvector. (One can also find a left eigenvector such that vA = λv, but we are usually concerned with right eigenvectors). If v is an eigenvector of A, then so is any rescaled vector sv for s , s ∈ R = 0. Moreover, sv still has the same eigenvalue. For this reason, we usually only look for unit eigenvectors. Suppose that a matrix A has n linearly independent eigenvectors, {v(1) , . . . , v( ) n }, with corresponding eigenvalues {λ1, . . . , λn}. We may concatenate all of the 42
  • 59. CHAPTER 2. LINEAR ALGEBRA
[Figure 2.3 shows two panels: the set of all unit vectors u ∈ R2 plotted as a unit circle, and the set of all points Au, which is the same circle stretched along the two eigenvector directions.]
Figure 2.3: An example of the effect of eigenvectors and eigenvalues. Here, we have a matrix A with two orthonormal eigenvectors, v(1) with eigenvalue λ1 and v(2) with eigenvalue λ2. (Left) We plot the set of all unit vectors u ∈ R2 as a unit circle. (Right) We plot the set of all points Au. By observing the way that A distorts the unit circle, we can see that it scales space in direction v(i) by λi.
eigenvectors to form a matrix V with one eigenvector per column: V = [v(1), . . . , v(n)]. Likewise, we can concatenate the eigenvalues to form a vector λ = [λ1, . . . , λn]. The eigendecomposition of A is then given by
A = V diag(λ) V^{-1}. (2.40)
We have seen that constructing matrices with specific eigenvalues and eigenvectors allows us to stretch space in desired directions. However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer.
Not every matrix can be decomposed into eigenvalues and eigenvectors. In some 43
  • 60. CHAPTER 2. LINEAR ALGEBRA cases, the decomposition exists, but may involve complex rather than real numbers. Fortunately, in this book, we usually need to decompose only a specific class of matrices that have a simple decomposition. Specifically, every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues: A Q Q = Λ  , (2.41) where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix. The eigenvalue Λi,i is associated with the eigenvector in column i of Q, denoted as Q:,i. Because Q is an orthogonal matrix, we can think of A as scaling space by λi in direction v( ) i . See figure for an example. 2.3 While any real symmetric matrix A is guaranteed to have an eigendecomposi- tion, the eigendecomposition may not be unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a Q using those eigenvectors instead. By convention, we usually sort the entries of Λ in descending order. Under this convention, the eigendecomposition is unique only if all of the eigenvalues are unique. The eigendecomposition of a matrix tells us many useful facts about the matrix. The matrix is singular if and only if any of the eigenvalues are zero. The eigendecomposition of a real symmetric matrix can also be used to optimize quadratic expressions of the form f(x) = x Ax subject to || || x 2 = 1. Whenever x is equal to an eigenvector of A, f takes on the value of the corresponding eigenvalue. The maximum value of f within the constraint region is the maximum eigenvalue and its minimum value within the constraint region is the minimum eigenvalue. A matrix whose eigenvalues are all positive is called positive definite. A matrix whose eigenvalues are all positive or zero-valued is calledpositive semidefi- nite. Likewise, if all eigenvalues are negative, the matrix is negative definite, and if all eigenvalues are negative or zero-valued, it is negative semidefinite. Positive semidefinite matrices are interesting because they guarantee that ∀x x , Ax ≥ 0. Positive definite matrices additionally guarantee that x Ax x = 0 ⇒ = 0. 2.8 Singular Value Decomposition In section , we saw how to decompose a matrix into eigenvectors and eigenvalues. 2.7 The singular value decomposition (SVD) provides another way to factorize a matrix, into singular vectors and singular values. The SVD allows us to discover some of the same kind of information as the eigendecomposition. However, 44
  • 61. CHAPTER 2. LINEAR ALGEBRA the SVD is more generally applicable. Every real matrix has a singular value decomposition, but the same is not true of the eigenvalue decomposition. For example, if a matrix is not square, the eigendecomposition is not defined, and we must use a singular value decomposition instead. Recall that the eigendecomposition involves analyzing a matrix A to discover a matrix V of eigenvectors and a vector of eigenvalues λ such that we can rewrite A as A V λ V = diag( ) −1 . (2.42) The singular value decomposition is similar, except this time we will write A as a product of three matrices: A UDV =  . (2.43) Suppose that A is an m n × matrix. Then U is defined to be an m m × matrix, D V to be an matrix, and m n × to be an matrix. n n × Each of these matrices is defined to have a special structure. The matrices U and V are both defined to be orthogonal matrices. The matrix D is defined to be a diagonal matrix. Note that is not necessarily square. D The elements along the diagonal of D are known as the singular values of the matrix A. The columns of U are known as the left-singular vectors. The columns of are known as as the V right-singular vectors. We can actually interpret the singular value decomposition of A in terms of the eigendecomposition of functions of A. The left-singular vectors of A are the eigenvectors of AA. The right-singular vectors of A are the eigenvectors of A A. The non-zero singular values of A are the square roots of the eigenvalues of A A. The same is true for AA . Perhaps the most useful feature of the SVD is that we can use it to partially generalize matrix inversion to non-square matrices, as we will see in the next section. 2.9 The Moore-Penrose Pseudoinverse Matrix inversion is not defined for matrices that are not square. Suppose we want to make a left-inverse of a matrix , so that we can solve a linear equation B A Ax y = (2.44) 45
  • 62. CHAPTER 2. LINEAR ALGEBRA by left-multiplying each side to obtain x By = . (2.45) Depending on the structure of the problem, it may not be possible to design a unique mapping from to . A B If A is taller than it is wide, then it is possible for this equation to have no solution. If A is wider than it is tall, then there could be multiple possible solutions. The Moore-Penrose pseudoinverse allows us to make some headway in these cases. The pseudoinverse of is defined as a matrix A A+ = lim α0 (A A I + α )−1 A . (2.46) Practical algorithms for computing the pseudoinverse are not based on this defini- tion, but rather the formula A+ = V D+ U  , (2.47) where U, D and V are the singular value decomposition of A, and the pseudoinverse D+ of a diagonal matrix D is obtained by taking the reciprocal of its non-zero elements then taking the transpose of the resulting matrix. When A has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions. Specifically, it provides the solution x = A+ y with minimal Euclidean norm || || x 2 among all possible solutions. When A has more rows than columns, it is possible for there to be no solution. In this case, using the pseudoinverse gives us the x for which Ax is as close as possible to in terms of Euclidean norm y || − || Ax y 2. 2.10 The Trace Operator The trace operator gives the sum of all of the diagonal entries of a matrix: Tr( ) = A  i Ai,i. (2.48) The trace operator is useful for a variety of reasons. Some operations that are difficult to specify without resorting to summation notation can be specified using 46
  • 63. CHAPTER 2. LINEAR ALGEBRA matrix products and the trace operator. For example, the trace operator provides an alternative way of writing the Frobenius norm of a matrix: || || A F =  Tr(AA). (2.49) Writing an expression in terms of the trace operator opens up opportunities to manipulate the expression using many useful identities. For example, the trace operator is invariant to the transpose operator: Tr( ) = Tr( A A ). (2.50) The trace of a square matrix composed of many factors is also invariant to moving the last factor into the first position, if the shapes of the corresponding matrices allow the resulting product to be defined: Tr( ) = Tr( ) = Tr( ) ABC CAB BCA (2.51) or more generally, Tr( n  i=1 F( ) i ) = Tr(F( ) n n−1  i=1 F( ) i ). (2.52) This invariance to cyclic permutation holds even if the resulting product has a different shape. For example, for A ∈ Rm n × and B ∈ Rn m × , we have Tr( ) = Tr( ) AB BA (2.53) even though AB ∈ Rm m × and BA ∈ Rn n × . Another useful fact to keep in mind is that a scalar is its own trace: a = Tr(a). 2.11 The Determinant The determinant of a square matrix, denoted det(A), is a function mapping matrices to real scalars. The determinant is equal to the product of all the eigenvalues of the matrix. The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space. If the determinant is 0, then space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, then the transformation preserves volume. 47
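The trace identities and the relationship between the determinant and the eigenvalues can be checked numerically. Below is a small NumPy sketch (illustrative only, not part of the text); it uses a random symmetric matrix so that the eigendecomposition of section 2.7 is real-valued.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B + B.T                      # symmetric, so its eigenvalues are real

# Frobenius norm expressed via the trace (equation 2.49)
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A @ A.T)))

# Invariance to cyclic permutation (equation 2.53), even for non-square factors
C = rng.standard_normal((3, 5))
D = rng.standard_normal((5, 3))
assert np.isclose(np.trace(C @ D), np.trace(D @ C))

# The determinant equals the product of the eigenvalues
eigvals = np.linalg.eigvalsh(A)
assert np.isclose(np.linalg.det(A), np.prod(eigvals))
```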
  • 64. CHAPTER 2. LINEAR ALGEBRA 2.12 Example: Principal Components Analysis One simple machine learning algorithm, principal components analysis or PCA can be derived using only knowledge of basic linear algebra. Suppose we have a collection of m points {x(1), . . . , x( ) m } in Rn. Suppose we would like to apply lossy compression to these points. Lossy compression means storing the points in a way that requires less memory but may lose some precision. We would like to lose as little precision as possible. One way we can encode these points is to represent a lower-dimensional version of them. For each point x( ) i ∈ Rn we will find a corresponding code vector c( ) i ∈ Rl. If l is smaller than n, it will take less memory to store the code points than the original data. We will want to find some encoding function that produces the code for an input, f(x) = c, and a decoding function that produces the reconstructed input given its code, . x x ≈ g f ( ( )) PCA is defined by our choice of the decoding function. Specifically, to make the decoder very simple, we choose to use matrix multiplication to map the code back into Rn. Let , where g( ) = c Dc D ∈ Rn l × is the matrix defining the decoding. Computing the optimal code for this decoder could be a difficult problem. To keep the encoding problem easy, PCA constrains the columns of D to be orthogonal to each other. (Note that D is still not technically “an orthogonal matrix” unless l n = ) With the problem as described so far, many solutions are possible, because we can increase the scale of D:,i if we decrease ci proportionally for all points. To give the problem a unique solution, we constrain all of the columns of to have unit D norm. In order to turn this basic idea into an algorithm we can implement, the first thing we need to do is figure out how to generate the optimal code point c∗ for each input point x. One way to do this is to minimize the distance between the input point x and its reconstruction, g(c∗). We can measure this distance using a norm. In the principal components algorithm, we use the L2 norm: c∗ = arg min c || − || x g( ) c 2. (2.54) We can switch to the squared L2 norm instead of the L2 norm itself, because both are minimized by the same value of c. Both are minimized by the same value of c because the L2 norm is non-negative and the squaring operation is 48
  • 65. CHAPTER 2. LINEAR ALGEBRA monotonically increasing for non-negative arguments. c∗ = arg min c || − || x g( ) c 2 2. (2.55) The function being minimized simplifies to ( ( )) x − g c  ( ( )) x − g c (2.56) (by the definition of the L2 norm, equation ) 2.30 = x x x −  g g ( ) c − ( ) c  x c + ( g ) g( ) c (2.57) (by the distributive property) = xx x − 2  g g ( ) + c ( ) c  g( ) c (2.58) (because the scalar g( ) c x is equal to the transpose of itself). We can now change the function being minimized again, to omit the first term, since this term does not depend on : c c∗ = arg min c −2x g g ( ) + c ( ) c  g . ( ) c (2.59) To make further progress, we must substitute in the definition of : g( ) c c∗ = arg min c −2x Dc c +  D Dc (2.60) = arg min c −2x Dc c +  Il c (2.61) (by the orthogonality and unit norm constraints on ) D = arg min c −2x Dc c +  c (2.62) We can solve this optimization problem using vector calculus (see section if 4.3 you do not know how to do this): ∇c( 2 − x Dc c +  c) = 0 (2.63) − 2D x c + 2 = 0 (2.64) c D =  x. (2.65) 49
  • 66. CHAPTER 2. LINEAR ALGEBRA This makes the algorithm efficient: we can optimally encode x just using a matrix-vector operation. To encode a vector, we apply the encoder function f( ) = x D x. (2.66) Using a further matrix multiplication, we can also define the PCA reconstruction operation: r g f ( ) = x ( ( )) = x DD x. (2.67) Next, we need to choose the encoding matrix D. To do so, we revisit the idea of minimizing the L2 distance between inputs and reconstructions. Since we will use the same matrix D to decode all of the points, we can no longer consider the points in isolation. Instead, we must minimize the Frobenius norm of the matrix of errors computed over all dimensions and all points: D∗ = arg min D   i,j  x ( ) i j − r(x( ) i )j 2 subject to D D I = l (2.68) To derive the algorithm for finding D∗, we will start by considering the case where l = 1. In this case, D is just a single vector, d. Substituting equation 2.67 into equation and simplifying into , the problem reduces to 2.68 D d d∗ = arg min d  i ||x( ) i − dd x( ) i ||2 2 subject to || || d 2 = 1. (2.69) The above formulation is the most direct way of performing the substitution, but is not the most stylistically pleasing way to write the equation. It places the scalar value dx( ) i on the right of the vector d. It is more conventional to write scalar coefficients on the left of vector they operate on. We therefore usually write such a formula as d∗ = arg min d  i ||x( ) i − d x( ) i d||2 2 subject to || || d 2 = 1, (2.70) or, exploiting the fact that a scalar is its own transpose, as d∗ = arg min d  i ||x( ) i − x( ) i  dd||2 2 subject to || || d 2 = 1. (2.71) The reader should aim to become familiar with such cosmetic rearrangements. 50
  • 67. CHAPTER 2. LINEAR ALGEBRA At this point, it can be helpful to rewrite the problem in terms of a single design matrix of examples, rather than as a sum over separate example vectors. This will allow us to use more compact notation. Let X ∈ Rm n × be the matrix defined by stacking all of the vectors describing the points, such that Xi,: = x( ) i  . We can now rewrite the problem as d∗ = arg min d || − X Xdd ||2 F subject to d d = 1. (2.72) Disregarding the constraint for the moment, we can simplify the Frobenius norm portion as follows: arg min d || − X Xdd ||2 F (2.73) = arg min d Tr  X Xdd −    X Xdd −   (2.74) (by equation ) 2.49 = arg min d Tr(X X X −  Xdd − dd X X dd +  X Xdd ) (2.75) = arg min d Tr(X X) Tr( − X Xdd ) Tr( − dd X  X) + Tr(dd X Xdd ) (2.76) = arg min d − Tr(X Xdd ) Tr( − dd X X) + Tr(dd X Xdd ) (2.77) (because terms not involving do not affect the ) d arg min = arg min d −2 Tr(X Xdd ) + Tr(dd X Xdd ) (2.78) (because we can cycle the order of the matrices inside a trace, equation ) 2.52 = arg min d −2 Tr(X Xdd ) + Tr(X Xdd dd ) (2.79) (using the same property again) At this point, we re-introduce the constraint: arg min d −2 Tr(X Xdd ) + Tr(X Xdd dd ) subject to d d = 1 (2.80) = arg min d −2 Tr(X Xdd ) + Tr(X Xdd  ) subject to d d = 1 (2.81) (due to the constraint) = arg min d − Tr(X Xdd ) subject to d d = 1 (2.82) 51
  • 68. CHAPTER 2. LINEAR ALGEBRA = arg max d Tr(X Xdd ) subject to d d = 1 (2.83) = arg max d Tr(d X Xd d ) subject to  d = 1 (2.84) This optimization problem may be solved using eigendecomposition. Specifically, the optimal d is given by the eigenvector of X X corresponding to the largest eigenvalue. This derivation is specific to the case of l = 1 and recovers only the first principal component. More generally, when we wish to recover a basis of principal components, the matrix D is given by the l eigenvectors corresponding to the largest eigenvalues. This may be shown using proof by induction. We recommend writing this proof as an exercise. Linear algebra is one of the fundamental mathematical disciplines that is necessary to understand deep learning. Another key area of mathematics that is ubiquitous in machine learning is probability theory, presented next. 52
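The result of this derivation — that the optimal D consists of the eigenvectors of X^T X with the largest eigenvalues — translates directly into code. The following NumPy sketch is one possible illustration, not the book's own implementation; it assumes the data have already been collected into a design matrix X and, for simplicity, does not center the data.

```python
import numpy as np

def pca_encode_decode(X, l):
    """Fit the rank-l PCA encoder/decoder of section 2.12.

    X : (m, n) design matrix, one example per row.
    l : number of principal components to keep.
    Returns the decoding matrix D (n, l), the codes C (m, l),
    and the reconstructions R (m, n).
    """
    # Eigendecomposition of X^T X (symmetric, so eigh applies);
    # eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)
    D = eigvecs[:, ::-1][:, :l]   # eigenvectors for the l largest eigenvalues
    C = X @ D                     # encoder f(x) = D^T x, applied to every row
    R = C @ D.T                   # reconstruction r(x) = D D^T x
    return D, C, R

# Illustrative use on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
D, C, R = pca_encode_decode(X, l=2)
assert np.allclose(D.T @ D, np.eye(2))   # columns of D are orthonormal
```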
  • 69. Chapter 3 Probability and Information Theory In this chapter, we describe probability theory and information theory. Probability theory is a mathematical framework for representing uncertain statements. It provides a means of quantifying uncertainty and axioms for deriving new uncertain statements. In artificial intelligence applications, we use probability theory in two major ways. First, the laws of probability tell us how AI systems should reason, so we design our algorithms to compute or approximate various expressions derived using probability theory. Second, we can use probability and statistics to theoretically analyze the behavior of proposed AI systems. Probability theory is a fundamental tool of many disciplines of science and engineering. We provide this chapter to ensure that readers whose background is primarily in software engineering with limited exposure to probability theory can understand the material in this book. While probability theory allows us to make uncertain statements and reason in the presence of uncertainty, information theory allows us to quantify the amount of uncertainty in a probability distribution. If you are already familiar with probability theory and information theory, you may wish to skip all of this chapter except for section , which describes the 3.14 graphs we use to describe structured probabilistic models for machine learning. If you have absolutely no prior experience with these subjects, this chapter should be sufficient to successfully carry out deep learning research projects, but we do suggest that you consult an additional resource, such as Jaynes 2003 ( ). 53
  • 70. CHAPTER 3. PROBABILITY AND INFORMATION THEORY 3.1 Why Probability? Many branches of computer science deal mostly with entities that are entirely deterministic and certain. A programmer can usually safely assume that a CPU will execute each machine instruction flawlessly. Errors in hardware do occur, but are rare enough that most software applications do not need to be designed to account for them. Given that many computer scientists and software engineers work in a relatively clean and certain environment, it can be surprising that machine learning makes heavy use of probability theory. This is because machine learning must always deal with uncertain quantities, and sometimes may also need to deal with stochastic (non-deterministic) quantities. Uncertainty and stochasticity can arise from many sources. Researchers have made compelling arguments for quantifying uncertainty using probability since at least the 1980s. Many of the arguments presented here are summarized from or inspired by Pearl 1988 ( ). Nearly all activities require some ability to reason in the presence of uncertainty. In fact, beyond mathematical statements that are true by definition, it is difficult to think of any proposition that is absolutely true or any event that is absolutely guaranteed to occur. There are three possible sources of uncertainty: 1. Inherent stochasticity in the system being modeled. For example, most interpretations of quantum mechanics describe the dynamics of subatomic particles as being probabilistic. We can also create theoretical scenarios that we postulate to have random dynamics, such as a hypothetical card game where we assume that the cards are truly shuffled into a random order. 2. Incomplete observability. Even deterministic systems can appear stochastic when we cannot observe all of the variables that drive the behavior of the system. For example, in the Monty Hall problem, a game show contestant is asked to choose between three doors and wins a prize held behind the chosen door. Two doors lead to a goat while a third leads to a car. The outcome given the contestant’s choice is deterministic, but from the contestant’s point of view, the outcome is uncertain. 3. Incomplete modeling. When we use a model that must discard some of the information we have observed, the discarded information results in uncertainty in the model’s predictions. For example, suppose we build a robot that can exactly observe the location of every object around it. If the 54
  • 71. CHAPTER 3. PROBABILITY AND INFORMATION THEORY robot discretizes space when predicting the future location of these objects, then the discretization makes the robot immediately become uncertain about the precise position of objects: each object could be anywhere within the discrete cell that it was observed to occupy. In many cases, it is more practical to use a simple but uncertain rule rather than a complex but certain one, even if the true rule is deterministic and our modeling system has the fidelity to accommodate a complex rule. For example, the simple rule “Most birds fly” is cheap to develop and is broadly useful, while a rule of the form, “Birds fly, except for very young birds that have not yet learned to fly, sick or injured birds that have lost the ability to fly, flightless species of birds including the cassowary, ostrich and kiwi. . .” is expensive to develop, maintain and communicate, and after all of this effort is still very brittle and prone to failure. While it should be clear that we need a means of representing and reasoning about uncertainty, it is not immediately obvious that probability theory can provide all of the tools we want for artificial intelligence applications. Probability theory was originally developed to analyze the frequencies of events. It is easy to see how probability theory can be used to study events like drawing a certain hand of cards in a game of poker. These kinds of events are often repeatable. When we say that an outcome has a probability p of occurring, it means that if we repeated the experiment (e.g., draw a hand of cards) infinitely many times, then proportion p of the repetitions would result in that outcome. This kind of reasoning does not seem immediately applicable to propositions that are not repeatable. If a doctor analyzes a patient and says that the patient has a 40% chance of having the flu, this means something very different—we can not make infinitely many replicas of the patient, nor is there any reason to believe that different replicas of the patient would present with the same symptoms yet have varying underlying conditions. In the case of the doctor diagnosing the patient, we use probability to represent a degree of belief, with 1 indicating absolute certainty that the patient has the flu and 0 indicating absolute certainty that the patient does not have the flu. The former kind of probability, related directly to the rates at which events occur, is known as frequentist probability, while the latter, related to qualitative levels of certainty, is known as Bayesian probability. If we list several properties that we expect common sense reasoning about uncertainty to have, then the only way to satisfy those properties is to treat Bayesian probabilities as behaving exactly the same as frequentist probabilities. For example, if we want to compute the probability that a player will win a poker game given that she has a certain set of cards, we use exactly the same formulas as when we compute the probability that a patient has a disease given that she 55
  • 72. CHAPTER 3. PROBABILITY AND INFORMATION THEORY has certain symptoms. For more details about why a small set of common sense assumptions implies that the same axioms must control both kinds of probability, see ( ). Ramsey 1926 Probability can be seen as the extension of logic to deal with uncertainty. Logic provides a set of formal rules for determining what propositions are implied to be true or false given the assumption that some other set of propositions is true or false. Probability theory provides a set of formal rules for determining the likelihood of a proposition being true given the likelihood of other propositions. 3.2 Random Variables A random variable is a variable that can take on different values randomly. We typically denote the random variable itself with a lower case letter in plain typeface, and the values it can take on with lower case script letters. For example, x1 and x2 are both possible values that the random variable x can take on. For vector-valued variables, we would write the random variable as x and one of its values as x. On its own, a random variable is just a description of the states that are possible; it must be coupled with a probability distribution that specifies how likely each of these states are. Random variables may be discrete or continuous. A discrete random variable is one that has a finite or countably infinite number of states. Note that these states are not necessarily the integers; they can also just be named states that are not considered to have any numerical value. A continuous random variable is associated with a real value. 3.3 Probability Distributions A probability distribution is a description of how likely a random variable or set of random variables is to take on each of its possible states. The way we describe probability distributions depends on whether the variables are discrete or continuous. 3.3.1 Discrete Variables and Probability Mass Functions A probability distribution over discrete variables may be described using a proba- bility mass function (PMF). We typically denote probability mass functions with a capital P. Often we associate each random variable with a different probability 56
  • 73. CHAPTER 3. PROBABILITY AND INFORMATION THEORY mass function and the reader must infer which probability mass function to use based on the identity of the random variable, rather than the name of the function; P P ( ) x is usually not the same as ( ) y . The probability mass function maps from a state of a random variable to the probability of that random variable taking on that state. The probability that x = x is denoted as P(x), with a probability of 1 indicating that x = x is certain and a probability of 0 indicating that x = x is impossible. Sometimes to disambiguate which PMF to use, we write the name of the random variable explicitly: P (x = x). Sometimes we define a variable first, then use ∼ notation to specify which distribution it follows later: x x . ∼ P ( ) Probability mass functions can act on many variables at the same time. Such a probability distribution over many variables is known as a joint probability distribution. P (x = x, y = y) denotes the probability that x = x and y = y simultaneously. We may also write for brevity. P x,y ( ) To be a probability mass function on a random variable x, a function P must satisfy the following properties: • The domain of must be the set of all possible states of x. P • ∀ ∈ x x,0 ≤ P(x) ≤ 1. An impossible event has probability and no state can 0 be less probable than that. Likewise, an event that is guaranteed to happen has probability , and no state can have a greater chance of occurring. 1 •  x∈x P(x) = 1. We refer to this property as being normalized. Without this property, we could obtain probabilities greater than one by computing the probability of one of many events occurring. For example, consider a single discrete random variable x with k different states. We can place a uniform distribution on x—that is, make each of its states equally likely—by setting its probability mass function to P x ( = x i) = 1 k (3.1) for all i. We can see that this fits the requirements for a probability mass function. The value 1 k is positive because is a positive integer. We also see that k  i P x ( = x i) =  i 1 k = k k = 1, (3.2) so the distribution is properly normalized. 57
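The normalization argument in equation 3.2 can be mirrored in a few lines of code. A minimal NumPy sketch (purely illustrative; the choice of k is arbitrary):

```python
import numpy as np

k = 6                                # number of states, e.g. a fair die
pmf = np.full(k, 1.0 / k)            # P(x = x_i) = 1/k for every state

assert np.all(pmf >= 0) and np.all(pmf <= 1)   # each probability lies in [0, 1]
assert np.isclose(pmf.sum(), 1.0)              # normalized: probabilities sum to 1
```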
  • 74. CHAPTER 3. PROBABILITY AND INFORMATION THEORY 3.3.2 Continuous Variables and Probability Density Functions When working with continuous random variables, we describe probability distri- butions using a probability density function (PDF) rather than a probability mass function. To be a probability density function, a function p must satisfy the following properties: • The domain of must be the set of all possible states of x. p • ∀ ∈ ≥ ≤ x x,p x ( ) 0 ( ) . p Note that we do not require x 1. •  p x dx ( ) = 1. A probability density function p(x) does not give the probability of a specific state directly, instead the probability of landing inside an infinitesimal region with volume is given by . δx p x δx ( ) We can integrate the density function to find the actual probability mass of a set of points. Specifically, the probability that x lies in some set S is given by the integral of p(x) over that set. In the univariate example, the probability that x lies in the interval is given by [ ] a, b  [ ] a,b p x dx ( ) . For an example of a probability density function corresponding to a specific probability density over a continuous random variable, consider a uniform distribu- tion on an interval of the real numbers. We can do this with a function u(x;a,b), where a and b are the endpoints of the interval, with b > a. The “;” notation means “parametrized by”; we consider x to be the argument of the function, while a and b are parameters that define the function. To ensure that there is no probability mass outside the interval, we say u(x;a,b) = 0 for all x ∈ [a,b] [ . Within a,b], u x a, b ( ; ) = 1 b a − . We can see that this is nonnegative everywhere. Additionally, it integrates to 1. We often denote that x follows the uniform distribution on [a,b] by writing x . ∼ U a,b ( ) 3.4 Marginal Probability Sometimes we know the probability distribution over a set of variables and we want to know the probability distribution over just a subset of them. The probability distribution over the subset is known as the distribution. marginal probability For example, suppose we have discrete random variables x and y, and we know P , (x y . We can find x with the : ) P( ) sum rule ∀ ∈ x x x ,P ( = ) = x  y P x, y . ( = x y = ) (3.3) 58
  • 75. CHAPTER 3. PROBABILITY AND INFORMATION THEORY The name “marginal probability” comes from the process of computing marginal probabilities on paper. When the values of P(x y , ) are written in a grid with different values of x in rows and different values of y in columns, it is natural to sum across a row of the grid, then write P(x) in the margin of the paper just to the right of the row. For continuous variables, we need to use integration instead of summation: p x ( ) =  p x,y dy. ( ) (3.4) 3.5 Conditional Probability In many cases, we are interested in the probability of some event, given that some other event has happened. This is called a conditional probability. We denote the conditional probability that y = y given x = x as P(y = y | x = x). This conditional probability can be computed with the formula P y x ( = y | x = ) = P y, x ( = y x = ) P x ( = x ) . (3.5) The conditional probability is only defined when P(x = x) > 0. We cannot compute the conditional probability conditioned on an event that never happens. It is important not to confuse conditional probability with computing what would happen if some action were undertaken. The conditional probability that a person is from Germany given that they speak German is quite high, but if a randomly selected person is taught to speak German, their country of origin does not change. Computing the consequences of an action is called making an intervention query. Intervention queries are the domain of causal modeling, which we do not explore in this book. 3.6 The Chain Rule of Conditional Probabilities Any joint probability distribution over many random variables may be decomposed into conditional distributions over only one variable: P (x(1) ,. .. ,x( ) n ) = ( P x(1) )Πn i=2P (x( ) i | x(1) ,. .. ,x( 1) i− ). (3.6) This observation is known as the chain rule or product rule of probability. It follows immediately from the definition of conditional probability in equation . 3.5 59
  • 76. CHAPTER 3. PROBABILITY AND INFORMATION THEORY For example, applying the definition twice, we get P , , P , P , (a b c) = (a b | c) (b c) P , P P (b c) = ( ) b c | ( ) c P , , P , P P . (a b c) = (a b | c) ( ) b c | ( ) c 3.7 Independence and Conditional Independence Two random variables x and y are independent if their probability distribution can be expressed as a product of two factors, one involving only x and one involving only y: ∀ ∈ ∈ x x,y y x y x y (3.7) , p( = x, = ) = ( y p = ) ( x p = ) y . Two random variables x and y areconditionally independent given a random variable z if the conditional probability distribution over x and y factorizes in this way for every value of z: ∀ ∈ ∈ ∈ | | | x x,y y,z z x y , p( = x, = y z x = ) = ( z p = x z y = ) ( z p = y z = ) z . (3.8) We can denote independence and conditional independence with compact notation: x y ⊥ means that x and y are independent, while x y z ⊥ | means that x and y are conditionally independent given z. 3.8 Expectation, Variance and Covariance The expectation or expected value of some function f(x) with respect to a probability distribution P (x) is the average or mean value that f takes on when x is drawn from . For discrete variables this can be computed with a summation: P Ex∼P [ ( )] = f x  x P x f x , ( ) ( ) (3.9) while for continuous variables, it is computed with an integral: Ex∼p[ ( )] = f x  p x f x dx. ( ) ( ) (3.10) 60
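As a brief numerical illustration of equations 3.9 and 3.10 (the distributions here are chosen arbitrarily): for a discrete variable the expectation is a weighted sum over states, and for a continuous variable it can be approximated by averaging f over samples drawn from p:

```python
import numpy as np

# Discrete case, eq. 3.9: E[f(x)] = sum_x P(x) f(x).
P = np.array([0.2, 0.5, 0.3])   # PMF over states 0, 1, 2
f = np.array([1.0, 4.0, 9.0])   # f evaluated at each state
print(np.dot(P, f))             # 0.2*1 + 0.5*4 + 0.3*9 = 4.9

# Continuous case, eq. 3.10: approximate E[f(x)] for x ~ N(0, 1) by Monte Carlo.
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
print(np.mean(x ** 2))          # close to 1, the variance of a standard normal
```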
  • 77. CHAPTER 3. PROBABILITY AND INFORMATION THEORY When the identity of the distribution is clear from the context, we may simply write the name of the random variable that the expectation is over, as in Ex[f(x)]. If it is clear which random variable the expectation is over, we may omit the subscript entirely, as in E[f(x)]. By default, we can assume that E[·] averages over the values of all the random variables inside the brackets. Likewise, when there is no ambiguity, we may omit the square brackets. Expectations are linear, for example, Ex[ ( ) + ( )] = αf x βg x αEx[ ( )] + f x βEx[ ( )] g x , (3.11) when and are not dependent on . α β x The variance gives a measure of how much the values of a function of a random variable x vary as we sample different values of x from its probability distribution: Var( ( )) = f x E  ( ( ) [ ( )]) f x − E f x 2  . (3.12) When the variance is low, the values of f(x) cluster near their expected value. The square root of the variance is known as the . standard deviation The covariance gives some sense of how much two values are linearly related to each other, as well as the scale of these variables: Cov( ( ) ( )) = [( ( ) [ ( )])( ( ) [ ( )])] f x , g y E f x − E f x g y − E g y . (3.13) High absolute values of the covariance mean that the values change very much and are both far from their respective means at the same time. If the sign of the covariance is positive, then both variables tend to take on relatively high values simultaneously. If the sign of the covariance is negative, then one variable tends to take on a relatively high value at the times that the other takes on a relatively low value and vice versa. Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables. The notions of covariance and dependence are related, but are in fact distinct concepts. They are related because two variables that are independent have zero covariance, and two variables that have non-zero covariance are dependent. How- ever, independence is a distinct property from covariance. For two variables to have zero covariance, there must be no linear dependence between them. Independence is a stronger requirement than zero covariance, because independence also excludes nonlinear relationships. It is possible for two variables to be dependent but have zero covariance. For example, suppose we first sample a real number x from a uniform distribution over the interval [−1, 1]. We next sample a random variable 61
s. With probability 1/2, we choose the value of s to be 1. Otherwise, we choose the value of s to be −1. We can then generate a random variable y by assigning y = sx. Clearly, x and y are not independent, because x completely determines the magnitude of y. However, Cov(x, y) = 0.

The covariance matrix of a random vector x ∈ R^n is an n × n matrix, such that

Cov(x)_{i,j} = Cov(x_i, x_j).    (3.14)

The diagonal elements of the covariance give the variance:

Cov(x_i, x_i) = Var(x_i).    (3.15)

3.9 Common Probability Distributions

Several simple probability distributions are useful in many contexts in machine learning.

3.9.1 Bernoulli Distribution

The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties:

P(x = 1) = φ    (3.16)
P(x = 0) = 1 − φ    (3.17)
P(x = x) = φ^x (1 − φ)^{1−x}    (3.18)
E_x[x] = φ    (3.19)
Var_x(x) = φ(1 − φ)    (3.20)

3.9.2 Multinoulli Distribution

The multinoulli or categorical distribution is a distribution over a single discrete variable with k different states, where k is finite.¹

¹ “Multinoulli” is a term that was recently coined by Gustavo Lacerda and popularized by Murphy (2012). The multinoulli distribution is a special case of the multinomial distribution. A multinomial distribution is the distribution over vectors in {0, . . . , n}^k representing how many times each of the k categories is visited when n samples are drawn from a multinoulli distribution. Many texts use the term “multinomial” to refer to multinoulli distributions without clarifying that they refer only to the n = 1 case.

The multinoulli distribution is
  • 79. CHAPTER 3. PROBABILITY AND INFORMATION THEORY parametrized by a vector p ∈ [0,1]k−1 , where pi gives the probability of the i-th state. The final, k-th state’s probability is given by 1− 1 p. Note that we must constrain 1 p ≤ 1. Multinoulli distributions are often used to refer to distributions over categories of objects, so we do not usually assume that state 1 has numerical value 1, etc. For this reason, we do not usually need to compute the expectation or variance of multinoulli-distributed random variables. The Bernoulli and multinoulli distributions are sufficient to describe any distri- bution over their domain. They are able to describe any distribution over their domain not so much because they are particularly powerful but rather because their domain is simple; they model discrete variables for which it is feasible to enumerate all of the states. When dealing with continuous variables, there are uncountably many states, so any distribution described by a small number of parameters must impose strict limits on the distribution. 3.9.3 Gaussian Distribution The most commonly used distribution over real numbers is the normal distribu- tion, also known as the : Gaussian distribution N ( ; x µ, σ2 ) =  1 2πσ2 exp  − 1 2σ2 ( ) x µ − 2  . (3.21) See figure for a plot of the density function. 3.1 The two parameters µ ∈ R and σ ∈ (0,∞) control the normal distribution. The parameter µ gives the coordinate of the central peak. This is also the mean of the distribution: E[x] = µ. The standard deviation of the distribution is given by σ, and the variance by σ2. When we evaluate the PDF, we need to square and invert σ. When we need to frequently evaluate the PDF with different parameter values, a more efficient way of parametrizing the distribution is to use a parameter β ∈ (0,∞) to control the precision or inverse variance of the distribution: N( ; x µ, β−1 ) =  β 2π exp  − 1 2 β x µ ( − )2  . (3.22) Normal distributions are a sensible choice for many applications. In the absence of prior knowledge about what form a distribution over the real numbers should take, the normal distribution is a good default choice for two major reasons. 63
[Figure 3.1: The normal distribution. The normal distribution N(x; µ, σ²) exhibits a classic “bell curve” shape, with the x coordinate of its central peak given by µ, and the width of its peak controlled by σ. The maximum of p(x) is at x = µ, and the inflection points are at x = µ ± σ. In this example, we depict the standard normal distribution, with µ = 0 and σ = 1.]

First, many distributions we wish to model are truly close to being normal distributions. The central limit theorem shows that the sum of many independent random variables is approximately normally distributed. This means that in practice, many complicated systems can be modeled successfully as normally distributed noise, even if the system can be decomposed into parts with more structured behavior.

Second, out of all possible probability distributions with the same variance, the normal distribution encodes the maximum amount of uncertainty over the real numbers. We can thus think of the normal distribution as being the one that inserts the least amount of prior knowledge into a model. Fully developing and justifying this idea requires more mathematical tools, and is postponed to section 19.4.2.

The normal distribution generalizes to R^n, in which case it is known as the multivariate normal distribution. It may be parametrized with a positive definite symmetric matrix Σ:

N(x; µ, Σ) = √(1 / ((2π)^n det(Σ))) exp(−(1/2) (x − µ)^⊤ Σ^{−1} (x − µ)).    (3.23)
  • 81. CHAPTER 3. PROBABILITY AND INFORMATION THEORY The parameter µ still gives the mean of the distribution, though now it is vector-valued. The parameter Σ gives the covariance matrix of the distribution. As in the univariate case, when we wish to evaluate the PDF several times for many different values of the parameters, the covariance is not a computationally efficient way to parametrize the distribution, since we need to invert Σ to evaluate the PDF. We can instead use a : precision matrix β N ( ; x µ β , −1 ) =  det( ) β (2 ) π n exp  − 1 2 ( ) x µ −  β x µ ( − )  . (3.24) We often fix the covariance matrix to be a diagonal matrix. An even simpler version is the isotropic Gaussian distribution, whose covariance matrix is a scalar times the identity matrix. 3.9.4 Exponential and Laplace Distributions In the context of deep learning, we often want to have a probability distribution with a sharp point at x = 0. To accomplish this, we can use the exponential distribution: p x λ λ ( ; ) = 1x≥0 exp ( ) −λx . (3.25) The exponential distribution uses the indicator function 1x≥0 to assign probability zero to all negative values of . x A closely related probability distribution that allows us to place a sharp peak of probability mass at an arbitrary point is the µ Laplace distribution Laplace( ; ) = x µ, γ 1 2γ exp  − | − | x µ γ  . (3.26) 3.9.5 The Dirac Distribution and Empirical Distribution In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, : δ x ( ) p x δ x µ . ( ) = ( − ) (3.27) The Dirac delta function is defined such that it is zero-valued everywhere except 0, yet integrates to 1. The Dirac delta function is not an ordinary function that associates each value x with a real-valued output, instead it is a different kind of 65
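As a small sanity check of equations 3.25 and 3.26 (a sketch with arbitrary parameter values, not part of the text), the code below evaluates the exponential and Laplace densities on a grid and confirms numerically that each integrates to approximately 1:

```python
import numpy as np

def exponential_pdf(x, lam):
    # p(x; lambda) = 1_{x >= 0} * lambda * exp(-lambda * x), eq. 3.25
    return np.where(x >= 0, lam * np.exp(-lam * np.maximum(x, 0.0)), 0.0)

def laplace_pdf(x, mu, gamma):
    # Laplace(x; mu, gamma) = (1 / (2*gamma)) * exp(-|x - mu| / gamma), eq. 3.26
    return np.exp(-np.abs(x - mu) / gamma) / (2.0 * gamma)

x = np.linspace(-50, 50, 200_001)
dx = x[1] - x[0]
print(np.sum(exponential_pdf(x, lam=1.5) * dx))        # ~1.0
print(np.sum(laplace_pdf(x, mu=0.0, gamma=2.0) * dx))  # ~1.0
```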
  • 82. CHAPTER 3. PROBABILITY AND INFORMATION THEORY mathematical object called a generalized function that is defined in terms of its properties when integrated. We can think of the Dirac delta function as being the limit point of a series of functions that put less and less mass on all points other than zero. By defining p(x) to be δ shifted by −µ we obtain an infinitely narrow and infinitely high peak of probability mass where . x µ = A common use of the Dirac delta distribution is as a component of an empirical distribution, p̂( ) = x 1 m m  i=1 δ(x x − ( ) i ) (3.28) which puts probability mass 1 m on each of the m points x(1) ,. .. ,x( ) m forming a given dataset or collection of samples. The Dirac delta distribution is only necessary to define the empirical distribution over continuous variables. For discrete variables, the situation is simpler: an empirical distribution can be conceptualized as a multinoulli distribution, with a probability associated to each possible input value that is simply equal to the empirical frequency of that value in the training set. We can view the empirical distribution formed from a dataset of training examples as specifying the distribution that we sample from when we train a model on this dataset. Another important perspective on the empirical distribution is that it is the probability density that maximizes the likelihood of the training data (see section ). 5.5 3.9.6 Mixtures of Distributions It is also common to define probability distributions by combining other simpler probability distributions. One common way of combining distributions is to construct a mixture distribution. A mixture distribution is made up of several component distributions. On each trial, the choice of which component distribution generates the sample is determined by sampling a component identity from a multinoulli distribution: P( ) = x  i P i P i ( = c ) ( = x c | ) (3.29) where c is the multinoulli distribution over component identities. P( ) We have already seen one example of a mixture distribution: the empirical distribution over real-valued variables is a mixture distribution with one Dirac component for each training example. 66
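The following sketch draws samples from a mixture of Gaussians exactly as equation 3.29 describes: first sample a component identity c from a multinoulli distribution, then sample x from the chosen component. The component parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.5, 0.3, 0.2])    # P(c = i), a multinoulli over component identities
means   = np.array([-3.0, 0.0, 4.0])   # mean of each Gaussian component
stds    = np.array([ 0.5, 1.0, 2.0])   # standard deviation of each component

def sample_mixture(n):
    c = rng.choice(len(weights), size=n, p=weights)   # latent component identity
    return rng.normal(means[c], stds[c])              # x ~ P(x | c)

x = sample_mixture(100_000)
print(x.mean())   # close to sum_i P(c = i) * mu_i = -1.5 + 0.0 + 0.8 = -0.7
```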
  • 83. CHAPTER 3. PROBABILITY AND INFORMATION THEORY The mixture model is one simple strategy for combining probability distributions to create a richer distribution. In chapter , we explore the art of building complex 16 probability distributions from simple ones in more detail. The mixture model allows us to briefly glimpse a concept that will be of paramount importance later—the latent variable. A latent variable is a random variable that we cannot observe directly. The component identity variable c of the mixture model provides an example. Latent variables may be related to x through the joint distribution, in this case, P(x c , ) = P(x c | )P (c). The distribution P (c) over the latent variable and the distribution P(x c | ) relating the latent variables to the visible variables determines the shape of the distribution P (x) even though it is possible to describe P(x) without reference to the latent variable. Latent variables are discussed further in section . 16.5 A very powerful and common type of mixture model is the Gaussian mixture model, in which the components p(x | c = i) are Gaussians. Each component has a separately parametrized mean µ( ) i and covariance Σ( ) i . Some mixtures can have more constraints. For example, the covariances could be shared across components via the constraint Σ( ) i = Σ, i ∀ . As with a single Gaussian distribution, the mixture of Gaussians might constrain the covariance matrix for each component to be diagonal or isotropic. In addition to the means and covariances, the parameters of a Gaussian mixture specify the prior probability αi = P(c = i) given to each component i. The word “prior” indicates that it expresses the model’s beliefs about c before it has observed x. By comparison, P(c | x) is a posterior probability, because it is computed after observation of x. A Gaussian mixture model is a universal approximator of densities, in the sense that any smooth density can be approximated with any specific, non-zero amount of error by a Gaussian mixture model with enough components. Figure shows samples from a Gaussian mixture model. 3.2 3.10 Useful Properties of Common Functions Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models. One of these functions is the : logistic sigmoid σ x ( ) = 1 1 + exp( ) −x . (3.30) The logistic sigmoid is commonly used to produce the φ parameter of a Bernoulli 67
  • 84. CHAPTER 3. PROBABILITY AND INFORMATION THEORY x1 x 2 Figure 3.2: Samples from a Gaussian mixture model. In this example, there are three components. From left to right, the first component has an isotropic covariance matrix, meaning it has the same amount of variance in each direction. The second has a diagonal covariance matrix, meaning it can control the variance separately along each axis-aligned direction. This example has more variance along thex2 axis than along the x1 axis. The third component has a full-rank covariance matrix, allowing it to control the variance separately along an arbitrary basis of directions. distribution because its range is (0,1), which lies within the valid range of values for the φ parameter. See figure for a graph of the sigmoid function. The 3.3 sigmoid function saturates when its argument is very positive or very negative, meaning that the function becomes very flat and insensitive to small changes in its input. Another commonly encountered function is the softplus function ( , Dugas et al. 2001): ζ x x . ( ) = log (1 + exp( )) (3.31) The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0,∞). It also arises commonly when manipulating expressions involving sigmoids. The name of the softplus function comes from the fact that it is a smoothed or “softened” version of x+ = max(0 ) ,x . (3.32) See figure for a graph of the softplus function. 3.4 The following properties are all useful enough that you may wish to memorize them: 68
[Figure 3.3: The logistic sigmoid function σ(x), plotted for x from −10 to 10; the values rise from 0 toward 1.]

[Figure 3.4: The softplus function ζ(x), plotted for x from −10 to 10; the values rise from 0 toward 10.]
σ(x) = exp(x) / (exp(x) + exp(0))    (3.33)
(d/dx) σ(x) = σ(x)(1 − σ(x))    (3.34)
1 − σ(x) = σ(−x)    (3.35)
log σ(x) = −ζ(−x)    (3.36)
(d/dx) ζ(x) = σ(x)    (3.37)
∀x ∈ (0, 1), σ⁻¹(x) = log(x / (1 − x))    (3.38)
∀x > 0, ζ⁻¹(x) = log(exp(x) − 1)    (3.39)
ζ(x) = ∫_{−∞}^{x} σ(y) dy    (3.40)
ζ(x) − ζ(−x) = x    (3.41)

The function σ⁻¹(x) is called the logit in statistics, but this term is more rarely used in machine learning.

Equation 3.41 provides extra justification for the name “softplus.” The softplus function is intended as a smoothed version of the positive part function, x⁺ = max{0, x}. The positive part function is the counterpart of the negative part function, x⁻ = max{0, −x}. To obtain a smooth function that is analogous to the negative part, one can use ζ(−x). Just as x can be recovered from its positive part and negative part via the identity x⁺ − x⁻ = x, it is also possible to recover x using the same relationship between ζ(x) and ζ(−x), as shown in equation 3.41.

3.11 Bayes’ Rule

We often find ourselves in a situation where we know P(y | x) and need to know P(x | y). Fortunately, if we also know P(x), we can compute the desired quantity using Bayes’ rule:

P(x | y) = P(x) P(y | x) / P(y).    (3.42)

Note that while P(y) appears in the formula, it is usually feasible to compute P(y) = Σ_x P(y | x) P(x), so we do not need to begin with knowledge of P(y).
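Here is a small numerical sketch of Bayes’ rule; the prior and likelihood values are invented for illustration. We compute P(y) by marginalization and then apply equation 3.42:

```python
import numpy as np

# x: whether a rare condition is present (1) or absent (0). Prior P(x).
P_x = np.array([0.99, 0.01])

# Likelihood P(y | x): rows indexed by x, columns by test result y (0 = negative, 1 = positive).
P_y_given_x = np.array([[0.95, 0.05],    # P(y | x = 0)
                        [0.10, 0.90]])   # P(y | x = 1)

# P(y) = sum_x P(y | x) P(x), so knowledge of P(y) is not needed up front.
P_y = P_y_given_x.T @ P_x

# Bayes' rule, eq. 3.42: P(x | y) = P(x) P(y | x) / P(y); rows of the result are indexed by y.
P_x_given_y = (P_y_given_x * P_x[:, None]).T / P_y[:, None]
print(P_x_given_y[1])   # posterior P(x | y = 1) is roughly [0.846, 0.154]
```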
  • 87. CHAPTER 3. PROBABILITY AND INFORMATION THEORY Bayes’ rule is straightforward to derive from the definition of conditional probability, but it is useful to know the name of this formula since many texts refer to it by name. It is named after the Reverend Thomas Bayes, who first discovered a special case of the formula. The general version presented here was independently discovered by Pierre-Simon Laplace. 3.12 Technical Details of Continuous Variables A proper formal understanding of continuous random variables and probability density functions requires developing probability theory in terms of a branch of mathematics known as measure theory. Measure theory is beyond the scope of this textbook, but we can briefly sketch some of the issues that measure theory is employed to resolve. In section , we saw that the probability of a continuous vector-valued 3.3.2 x lying in some set S is given by the integral of p(x) over the set S. Some choices of set S can produce paradoxes. For example, it is possible to construct two sets S1 and S2 such that p(x ∈ S1) + p(x ∈ S2) > 1 but S1 ∩ S2 = ∅. These sets are generally constructed making very heavy use of the infinite precision of real numbers, for example by making fractal-shaped sets or sets that are defined by transforming the set of rational numbers.2 One of the key contributions of measure theory is to provide a characterization of the set of sets that we can compute the probability of without encountering paradoxes. In this book, we only integrate over sets with relatively simple descriptions, so this aspect of measure theory never becomes a relevant concern. For our purposes, measure theory is more useful for describing theorems that apply to most points in Rn but do not apply to some corner cases. Measure theory provides a rigorous way of describing that a set of points is negligibly small. Such a set is said to have measure zero. We do not formally define this concept in this textbook. For our purposes, it is sufficient to understand the intuition that a set of measure zero occupies no volume in the space we are measuring. For example, within R2 , a line has measure zero, while a filled polygon has positive measure. Likewise, an individual point has measure zero. Any union of countably many sets that each have measure zero also has measure zero (so the set of all the rational numbers has measure zero, for instance). Another useful term from measure theory is almost everywhere. A property that holds almost everywhere holds throughout all of space except for on a set of 2 The Banach-Tarski theorem provides a fun example of such sets. 71
measure zero. Because the exceptions occupy a negligible amount of space, they can be safely ignored for many applications. Some important results in probability theory hold for all discrete values but only hold “almost everywhere” for continuous values.

Another technical detail of continuous variables relates to handling continuous random variables that are deterministic functions of one another. Suppose we have two random variables, x and y, such that y = g(x), where g is an invertible, continuous, differentiable transformation. One might expect that p_y(y) = p_x(g⁻¹(y)). This is actually not the case.

As a simple example, suppose we have scalar random variables x and y. Suppose y = x/2 and x ∼ U(0, 1). If we use the rule p_y(y) = p_x(2y) then p_y will be 0 everywhere except the interval [0, 1/2], and it will be 1 on this interval. This means

∫ p_y(y) dy = 1/2,    (3.43)

which violates the definition of a probability distribution. This is a common mistake. The problem with this approach is that it fails to account for the distortion of space introduced by the function g. Recall that the probability of x lying in an infinitesimally small region with volume δx is given by p(x)δx. Since g can expand or contract space, the infinitesimal volume surrounding x in x space may have a different volume in y space.

To see how to correct the problem, we return to the scalar case. We need to preserve the property

|p_y(g(x)) dy| = |p_x(x) dx|.    (3.44)

Solving from this, we obtain

p_y(y) = p_x(g⁻¹(y)) |∂x/∂y|    (3.45)

or equivalently

p_x(x) = p_y(g(x)) |∂g(x)/∂x|.    (3.46)

In higher dimensions, the derivative generalizes to the determinant of the Jacobian matrix—the matrix with J_{i,j} = ∂x_i/∂y_j. Thus, for real-valued vectors x and y,

p_x(x) = p_y(g(x)) |det(∂g(x)/∂x)|.    (3.47)
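To see the correction of equation 3.45 in action, the following sketch (our own grid-based check) repeats the y = x/2 example numerically: without the |∂x/∂y| factor the candidate density integrates to 1/2, and with it the density is properly normalized:

```python
import numpy as np

def p_x(x):
    # x ~ U(0, 1)
    return np.where((x >= 0) & (x <= 1), 1.0, 0.0)

y = np.linspace(-0.5, 1.5, 200_001)
dy = y[1] - y[0]

# Wrong: p_y(y) = p_x(g^{-1}(y)) = p_x(2y), ignoring the distortion of space.
wrong = p_x(2 * y)
print(np.sum(wrong * dy))   # ~0.5, violating normalization (eq. 3.43)

# Correct, eq. 3.45: p_y(y) = p_x(g^{-1}(y)) * |dx/dy|; here x = 2y, so |dx/dy| = 2.
right = p_x(2 * y) * 2.0
print(np.sum(right * dy))   # ~1.0
```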
  • 89. CHAPTER 3. PROBABILITY AND INFORMATION THEORY 3.13 Information Theory Information theory is a branch of applied mathematics that revolves around quantifying how much information is present in a signal. It was originally invented to study sending messages from discrete alphabets over a noisy channel, such as communication via radio transmission. In this context, information theory tells how to design optimal codes and calculate the expected length of messages sampled from specific probability distributions using various encoding schemes. In the context of machine learning, we can also apply information theory to continuous variables where some of these message length interpretations do not apply. This field is fundamental to many areas of electrical engineering and computer science. In this textbook, we mostly use a few key ideas from information theory to characterize probability distributions or quantify similarity between probability distributions. For more detail on information theory, see Cover and Thomas 2006 MacKay ( ) or ( ). 2003 The basic intuition behind information theory is that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred. A message saying “the sun rose this morning” is so uninformative as to be unnecessary to send, but a message saying “there was a solar eclipse this morning” is very informative. We would like to quantify information in a way that formalizes this intuition. Specifically, • Likely events should have low information content, and in the extreme case, events that are guaranteed to happen should have no information content whatsoever. • Less likely events should have higher information content. • Independent events should have additive information. For example, finding out that a tossed coin has come up as heads twice should convey twice as much information as finding out that a tossed coin has come up as heads once. In order to satisfy all three of these properties, we define the self-information of an event x to be = x I x P x . ( ) = log − ( ) (3.48) In this book, we always use log to mean the natural logarithm, with base e. Our definition of I (x) is therefore written in units of nats. One nat is the amount of 73
  • 90. CHAPTER 3. PROBABILITY AND INFORMATION THEORY information gained by observing an event of probability 1 e. Other texts use base-2 logarithms and units called bits or shannons; information measured in bits is just a rescaling of information measured in nats. When x is continuous, we use the same definition of information by analogy, but some of the properties from the discrete case are lost. For example, an event with unit density still has zero information, despite not being an event that is guaranteed to occur. Self-information deals only with a single outcome. We can quantify the amount of uncertainty in an entire probability distribution using the Shannon entropy: H( ) = x Ex∼P [ ( )] = I x −Ex∼P [log ( )] P x . (3.49) also denoted H(P). In other words, the Shannon entropy of a distribution is the expected amount of information in an event drawn from that distribution. It gives a lower bound on the number of bits (if the logarithm is base 2, otherwise the units are different) needed on average to encode symbols drawn from a distribution P. Distributions that are nearly deterministic (where the outcome is nearly certain) have low entropy; distributions that are closer to uniform have high entropy. See figure for a demonstration. When 3.5 x is continuous, the Shannon entropy is known as the differential entropy. If we have two separate probability distributions P (x) and Q(x) over the same random variable x, we can measure how different these two distributions are using the Kullback-Leibler (KL) divergence: DKL( ) = P Q  Ex∼P  log P x ( ) Q x ( )  = Ex∼P [log ( ) log ( )] P x − Q x . (3.50) In the case of discrete variables, it is the extra amount of information (measured in bits if we use the base logarithm, but in machine learning we usually use nats 2 and the natural logarithm) needed to send a message containing symbols drawn from probability distribution P, when we use a code that was designed to minimize the length of messages drawn from probability distribution . Q The KL divergence has many useful properties, most notably that it is non- negative. The KL divergence is 0 if and only if P and Q are the same distribution in the case of discrete variables, or equal “almost everywhere” in the case of continuous variables. Because the KL divergence is non-negative and measures the difference between two distributions, it is often conceptualized as measuring some sort of distance between these distributions. However, it is not a true distance measure because it is not symmetric: DKL(P Q  ) = DKL(Q P  ) for some P and Q. This 74
asymmetry means that there are important consequences to the choice of whether to use D_KL(P‖Q) or D_KL(Q‖P). See figure 3.6 for more detail.

[Figure 3.5: This plot shows how distributions that are closer to deterministic have low Shannon entropy while distributions that are close to uniform have high Shannon entropy. On the horizontal axis, we plot p, the probability of a binary random variable being equal to 1; the vertical axis is the Shannon entropy in nats. The entropy is given by (p − 1) log(1 − p) − p log p. When p is near 0, the distribution is nearly deterministic, because the random variable is nearly always 0. When p is near 1, the distribution is nearly deterministic, because the random variable is nearly always 1. When p = 0.5, the entropy is maximal, because the distribution is uniform over the two outcomes.]

A quantity that is closely related to the KL divergence is the cross-entropy H(P, Q) = H(P) + D_KL(P‖Q), which is similar to the KL divergence but lacking the term on the left:

H(P, Q) = −E_{x∼P} log Q(x).    (3.51)

Minimizing the cross-entropy with respect to Q is equivalent to minimizing the KL divergence, because Q does not participate in the omitted term.

When computing many of these quantities, it is common to encounter expressions of the form 0 log 0. By convention, in the context of information theory, we treat these expressions as lim_{x→0} x log x = 0.

3.14 Structured Probabilistic Models

Machine learning algorithms often involve probability distributions over a very large number of random variables. Often, these probability distributions involve direct interactions between relatively few variables. Using a single function to
  • 92. CHAPTER 3. PROBABILITY AND INFORMATION THEORY x Probability Density q∗ = argminqDKL( ) p q  p x ( ) q∗ ( ) x x Probability Density q∗ = argminqDKL( ) q p  p( ) x q∗ ( ) x Figure 3.6: The KL divergence is asymmetric. Suppose we have a distributionp(x) and wish to approximate it with another distribution q(x). We have the choice of minimizing either DKL(p q  ) or DKL(q p  ). We illustrate the effect of this choice using a mixture of two Gaussians for p, and a single Gaussian for q. The choice of which direction of the KL divergence to use is problem-dependent. Some applications require an approximation that usually places high probability anywhere that the true distribution places high probability, while other applications require an approximation that rarely places high probability anywhere that the true distribution places low probability. The choice of the direction of the KL divergence reflects which of these considerations takes priority for each application. (Left)The effect of minimizing DKL(p q  ). In this case, we select a q that has high probability where p has high probability. When p has multiple modes, q chooses to blur the modes together, in order to put high probability mass on all of them. (Right)The effect of minimizing DKL(q p  ). In this case, we select a q that has low probability where p has low probability. When p has multiple modes that are sufficiently widely separated, as in this figure, the KL divergence is minimized by choosing a single mode, in order to avoid putting probability mass in the low-probability areas between modes ofp. Here, we illustrate the outcome when q is chosen to emphasize the left mode. We could also have achieved an equal value of the KL divergence by choosing the right mode. If the modes are not separated by a sufficiently strong low probability region, then this direction of the KL divergence can still choose to blur the modes. 76
  • 93. CHAPTER 3. PROBABILITY AND INFORMATION THEORY describe the entire joint probability distribution can be very inefficient (both computationally and statistically). Instead of using a single function to represent a probability distribution, we can split a probability distribution into many factors that we multiply together. For example, suppose we have three random variables: a, b and c. Suppose that a influences the value of b and b influences the value of c, but that a and c are independent given b. We can represent the probability distribution over all three variables as a product of probability distributions over two variables: p , , p p p . (a b c) = ( ) a ( ) b a | ( ) c b | (3.52) These factorizations can greatly reduce the number of parameters needed to describe the distribution. Each factor uses a number of parameters that is exponential in the number of variables in the factor. This means that we can greatly reduce the cost of representing a distribution if we are able to find a factorization into distributions over fewer variables. We can describe these kinds of factorizations using graphs. Here we use the word “graph” in the sense of graph theory: a set of vertices that may be connected to each other with edges. When we represent the factorization of a probability distribution with a graph, we call it a structured probabilistic model or graphical model. There are two main kinds of structured probabilistic models: directed and undirected. Both kinds of graphical models use a graph G in which each node in the graph corresponds to a random variable, and an edge connecting two random variables means that the probability distribution is able to represent direct interactions between those two random variables. Directed models use graphs with directed edges, and they represent fac- torizations into conditional probability distributions, as in the example above. Specifically, a directed model contains one factor for every random variable xi in the distribution, and that factor consists of the conditional distribution over xi given the parents of xi, denoted PaG(xi): p( ) = x  i p (xi | PaG (xi)). (3.53) See figure for an example of a directed graph and the factorization of probability 3.7 distributions it represents. Undirected models use graphs with undirected edges, and they represent factorizations into a set of functions; unlike in the directed case, these functions 77
  • 94. CHAPTER 3. PROBABILITY AND INFORMATION THEORY a a c c b b e e d d Figure 3.7: A directed graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as p , , , , p p p , p p . (a b c d e) = ( ) a ( ) b a | (c a | b) ( ) d b | ( ) e c | (3.54) This graph allows us to quickly see some properties of the distribution. For example,a and c interact directly, but a and e interact only indirectly via c. are usually not probability distributions of any kind. Any set of nodes that are all connected to each other in G is called a clique. Each clique C( ) i in an undirected model is associated with a factor φ( ) i (C( ) i ). These factors are just functions, not probability distributions. The output of each factor must be non-negative, but there is no constraint that the factor must sum or integrate to 1 like a probability distribution. The probability of a configuration of random variables is proportional to the product of all of these factors—assignments that result in larger factor values are more likely. Of course, there is no guarantee that this product will sum to 1. We therefore divide by a normalizing constant Z, defined to be the sum or integral over all states of the product of the φ functions, in order to obtain a normalized probability distribution: p( ) = x 1 Z  i φ( ) i  C( ) i  . (3.55) See figure for an example of an undirected graph and the factorization of 3.8 probability distributions it represents. Keep in mind that these graphical representations of factorizations are a language for describing probability distributions. They are not mutually exclusive families of probability distributions. Being directed or undirected is not a property of a probability distribution; it is a property of a particular description of a 78
  • 95. CHAPTER 3. PROBABILITY AND INFORMATION THEORY a a c c b b e e d d Figure 3.8: An undirected graphical model over random variablesa, b, c, d and e. This graph corresponds to probability distributions that can be factored as p , , , , (a b c d e) = 1 Z φ(1) ( ) a b c , , φ(2) ( ) b d , φ(3) ( ) c e , . (3.56) This graph allows us to quickly see some properties of the distribution. For example,a and c interact directly, but a and e interact only indirectly via c. probability distribution, but any probability distribution may be described in both ways. Throughout parts and of this book, we will use structured probabilistic I II models merely as a language to describe which direct probabilistic relationships different machine learning algorithms choose to represent. No further understanding of structured probabilistic models is needed until the discussion of research topics, in part , where we will explore structured probabilistic models in much greater III detail. This chapter has reviewed the basic concepts of probability theory that are most relevant to deep learning. One more set of fundamental mathematical tools remains: numerical methods. 79
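Before moving on, here is a concrete illustration of the directed factorization of equations 3.52 and 3.53; the graph a → b → c and the conditional probability tables below are invented for this sketch. Building the joint distribution from its factors confirms that it is normalized and that a and c are conditionally independent given b:

```python
import numpy as np

# Binary variables a -> b -> c, as in eq. 3.52: p(a, b, c) = p(a) p(b | a) p(c | b).
p_a = np.array([0.6, 0.4])              # p(a)
p_b_given_a = np.array([[0.7, 0.3],     # p(b | a = 0)
                        [0.2, 0.8]])    # p(b | a = 1)
p_c_given_b = np.array([[0.9, 0.1],     # p(c | b = 0)
                        [0.5, 0.5]])    # p(c | b = 1)

# Assemble the full joint table: joint[a, b, c] = p(a) p(b | a) p(c | b).
joint = p_a[:, None, None] * p_b_given_a[:, :, None] * p_c_given_b[None, :, :]
assert np.isclose(joint.sum(), 1.0)     # the factorization defines a valid distribution

# a and c are conditionally independent given b: p(c | a, b) does not depend on a.
p_c_given_ab = joint / joint.sum(axis=2, keepdims=True)
print(np.allclose(p_c_given_ab[0], p_c_given_ab[1]))   # True
```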
  • 96. Chapter 4 Numerical Computation Machine learning algorithms usually require a high amount of numerical compu- tation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula providing a symbolic expression for the correct so- lution. Common operations include optimization (finding the value of an argument that minimizes or maximizes a function) and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory. 4.1 Overflow and Underflow The fundamental difficulty in performing continuous math on a digital computer is that we need to represent infinitely many real numbers with a finite number of bit patterns. This means that for almost all real numbers, we incur some approximation error when we represent the number in the computer. In many cases, this is just rounding error. Rounding error is problematic, especially when it compounds across many operations, and can cause algorithms that work in theory to fail in practice if they are not designed to minimize the accumulation of rounding error. One form of rounding error that is particularly devastating is underflow. Underflow occurs when numbers near zero are rounded to zero. Many functions behave qualitatively differently when their argument is zero rather than a small positive number. For example, we usually want to avoid division by zero (some 80
  • 97. CHAPTER 4. NUMERICAL COMPUTATION software environments will raise exceptions when this occurs, others will return a result with a placeholder not-a-number value) or taking the logarithm of zero (this is usually treated as −∞, which then becomes not-a-number if it is used for many further arithmetic operations). Another highly damaging form of numerical error is overflow. Overflow occurs when numbers with large magnitude are approximated as ∞ or −∞. Further arithmetic will usually change these infinite values into not-a-number values. One example of a function that must be stabilized against underflow and overflow is the softmax function. The softmax function is often used to predict the probabilities associated with a multinoulli distribution. The softmax function is defined to be softmax( ) x i = exp(xi) n j=1 exp(xj) . (4.1) Consider what happens when all of the xi are equal to some constant c. Analytically, we can see that all of the outputs should be equal to 1 n. Numerically, this may not occur when c has large magnitude. If c is very negative, then exp(c) will underflow. This means the denominator of the softmax will become 0, so the final result is undefined. When c is very large and positive, exp(c) will overflow, again resulting in the expression as a whole being undefined. Both of these difficulties can be resolved by instead evaluating softmax(z) where z = x − maxi xi. Simple algebra shows that the value of the softmax function is not changed analytically by adding or subtracting a scalar from the input vector. Subtracting maxi xi results in the largest argument to exp being 0, which rules out the possibility of overflow. Likewise, at least one term in the denominator has a value of 1, which rules out the possibility of underflow in the denominator leading to a division by zero. There is still one small problem. Underflow in the numerator can still cause the expression as a whole to evaluate to zero. This means that if we implement log softmax(x) by first running the softmax subroutine then passing the result to the log function, we could erroneously obtain −∞. Instead, we must implement a separate function that calculates log softmax in a numerically stable way. The log softmax function can be stabilized using the same trick as we used to stabilize the function. softmax For the most part, we do not explicitly detail all of the numerical considerations involved in implementing the various algorithms described in this book. Developers of low-level libraries should keep numerical issues in mind when implementing deep learning algorithms. Most readers of this book can simply rely on low- level libraries that provide stable implementations. In some cases, it is possible to implement a new algorithm and have the new implementation automatically 81
  • 98. CHAPTER 4. NUMERICAL COMPUTATION stabilized. Theano ( , ; , ) is an example Bergstra et al. 2010 Bastien et al. 2012 of a software package that automatically detects and stabilizes many common numerically unstable expressions that arise in the context of deep learning. 4.2 Poor Conditioning Conditioning refers to how rapidly a function changes with respect to small changes in its inputs. Functions that change rapidly when their inputs are perturbed slightly can be problematic for scientific computation because rounding errors in the inputs can result in large changes in the output. Consider the function f(x) = A−1 x. When A ∈ Rn n × has an eigenvalue decomposition, its condition number is max i,j     λi λj    . (4.2) This is the ratio of the magnitude of the largest and smallest eigenvalue. When this number is large, matrix inversion is particularly sensitive to error in the input. This sensitivity is an intrinsic property of the matrix itself, not the result of rounding error during matrix inversion. Poorly conditioned matrices amplify pre-existing errors when we multiply by the true matrix inverse. In practice, the error will be compounded further by numerical errors in the inversion process itself. 4.3 Gradient-Based Optimization Most deep learning algorithms involve optimization of some sort. Optimization refers to the task of either minimizing or maximizing some function f(x) by altering x. We usually phrase most optimization problems in terms of minimizing f(x). Maximization may be accomplished via a minimization algorithm by minimizing −f( ) x . The function we want to minimize or maximize is called the objective func- tion or criterion. When we are minimizing it, we may also call it the cost function, loss function, or error function. In this book, we use these terms interchangeably, though some machine learning publications assign special meaning to some of these terms. We often denote the value that minimizes or maximizes a function with a superscript . For example, we might say ∗ x∗ = arg min ( ) f x . 82
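Returning briefly to the stabilization trick described in section 4.1, the following is a minimal sketch (our own illustrative implementation) of a numerically stable softmax and log softmax:

```python
import numpy as np

def softmax(x):
    # Shift by the max so the largest argument to exp is 0, ruling out overflow,
    # and guaranteeing at least one term of 1 in the denominator (no division by zero).
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def log_softmax(x):
    # Computed directly, rather than as log(softmax(x)), so underflow in the
    # numerator cannot produce log(0) = -inf.
    z = x - np.max(x)
    return z - np.log(np.sum(np.exp(z)))

x = np.array([1e4, 1e4, 1e4])                # naive exp(x) would overflow here
print(softmax(x))                            # [1/3, 1/3, 1/3]
print(log_softmax(np.array([-1e4, 0.0])))    # finite values, no -inf in the first entry
```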
[Figure 4.1: An illustration of how the gradient descent algorithm uses the derivatives of a function to follow the function downhill to a minimum. The plot shows f(x) = (1/2)x² and its derivative f′(x) = x. For x < 0, we have f′(x) < 0, so we can decrease f by moving rightward. For x > 0, we have f′(x) > 0, so we can decrease f by moving leftward. The global minimum is at x = 0; since f′(x) = 0 there, gradient descent halts.]

We assume the reader is already familiar with calculus, but provide a brief review of how calculus concepts relate to optimization here.

Suppose we have a function y = f(x), where both x and y are real numbers. The derivative of this function is denoted as f′(x) or as dy/dx. The derivative f′(x) gives the slope of f(x) at the point x. In other words, it specifies how to scale a small change in the input in order to obtain the corresponding change in the output: f(x + ε) ≈ f(x) + ε f′(x).

The derivative is therefore useful for minimizing a function because it tells us how to change x in order to make a small improvement in y. For example, we know that f(x − ε sign(f′(x))) is less than f(x) for small enough ε. We can thus reduce f(x) by moving x in small steps with the opposite sign of the derivative. This technique is called gradient descent (Cauchy, 1847). See figure 4.1 for an example of this technique.

When f′(x) = 0, the derivative provides no information about which direction to move. Points where f′(x) = 0 are known as critical points or stationary points. A local minimum is a point where f(x) is lower than at all neighboring points, so it is no longer possible to decrease f(x) by making infinitesimal steps. A local maximum is a point where f(x) is higher than at all neighboring points,
  • 100. CHAPTER 4. NUMERICAL COMPUTATION Minimum Maximum Saddle point Figure 4.2: Examples of each of the three types of critical points in 1-D. A critical point is a point with zero slope. Such a point can either be a local minimum, which is lower than the neighboring points, a local maximum, which is higher than the neighboring points, or a saddle point, which has neighbors that are both higher and lower than the point itself. so it is not possible to increase f (x) by making infinitesimal steps. Some critical points are neither maxima nor minima. These are known as saddle points. See figure for examples of each type of critical point. 4.2 A point that obtains the absolute lowest value of f(x) is a global minimum. It is possible for there to be only one global minimum or multiple global minima of the function. It is also possible for there to be local minima that are not globally optimal. In the context of deep learning, we optimize functions that may have many local minima that are not optimal, and many saddle points surrounded by very flat regions. All of this makes optimization very difficult, especially when the input to the function is multidimensional. We therefore usually settle for finding a value of f that is very low, but not necessarily minimal in any formal sense. See figure for an example. 4.3 We often minimize functions that have multiple inputs: f : Rn → R. For the concept of “minimization” to make sense, there must still be only one (scalar) output. For functions with multiple inputs, we must make use of the concept of partial derivatives. The partial derivative ∂ ∂xi f(x) measures how f changes as only the variable xi increases at point x. The gradient generalizes the notion of derivative to the case where the derivative is with respect to a vector: the gradient of f is the vector containing all of the partial derivatives, denoted ∇xf(x). Element i of the gradient is the partial derivative of f with respect to xi. In multiple dimensions, 84
  • 101. CHAPTER 4. NUMERICAL COMPUTATION x f x ( ) Ideally, we would like to arrive at the global minimum, but this might not be possible. This local minimum performs nearly as well as the global one, so it is an acceptable halting point. This local minimum performs poorly and should be avoided. Figure 4.3: Optimization algorithms may fail to find a global minimum when there are multiple local minima or plateaus present. In the context of deep learning, we generally accept such solutions even though they are not truly minimal, so long as they correspond to significantly low values of the cost function. critical points are points where every element of the gradient is equal to zero. The directional derivative in direction (a unit vector) is the slope of the u function f in direction u. In other words, the directional derivative is the derivative of the function f(x + αu) with respect to α, evaluated at α = 0. Using the chain rule, we can see that ∂ ∂αf α ( + x u) evaluates to u∇xf α ( ) x when = 0. To minimize f, we would like to find the direction in which f decreases the fastest. We can do this using the directional derivative: min u u , u=1 u ∇xf( ) x (4.3) = min u u , u=1 || || u 2||∇xf( ) x ||2 cos θ (4.4) where θ is the angle between u and the gradient. Substituting in || || u 2 = 1 and ignoring factors that do not depend on u, this simplifies to minu cos θ. This is minimized when u points in the opposite direction as the gradient. In other words, the gradient points directly uphill, and the negative gradient points directly downhill. We can decrease f by moving in the direction of the negative gradient. This is known as the or . method of steepest descent gradient descent Steepest descent proposes a new point x = x − ∇  xf( ) x (4.5) 85
  • 102. CHAPTER 4. NUMERICAL COMPUTATION where  is the learning rate, a positive scalar determining the size of the step. We can choose  in several different ways. A popular approach is to set  to a small constant. Sometimes, we can solve for the step size that makes the directional derivative vanish. Another approach is to evaluate f  (x − ∇xf( )) x for several values of  and choose the one that results in the smallest objective function value. This last strategy is called a line search. Steepest descent converges when every element of the gradient is zero (or, in practice, very close to zero). In some cases, we may be able to avoid running this iterative algorithm, and just jump directly to the critical point by solving the equation ∇xf( ) = 0 x for . x Although gradient descent is limited to optimization in continuous spaces, the general concept of repeatedly making a small move (that is approximately the best small move) towards better configurations can be generalized to discrete spaces. Ascending an objective function of discrete parameters is called hill climbing ( , ). Russel and Norvig 2003 4.3.1 Beyond the Gradient: Jacobian and Hessian Matrices Sometimes we need to find all of the partial derivatives of a function whose input and output are both vectors. The matrix containing all such partial derivatives is known as a Jacobian matrix. Specifically, if we have a function f : Rm → Rn, then the Jacobian matrix J ∈ Rn m × of is defined such that f Ji,j = ∂ ∂xj f( ) x i. We are also sometimes interested in a derivative of a derivative. This is known as a second derivative. For example, for a function f : Rn → R, the derivative with respect to xi of the derivative of f with respect to xj is denoted as ∂2 ∂xi ∂xj f. In a single dimension, we can denote d2 dx2 f by f  (x). The second derivative tells us how the first derivative will change as we vary the input. This is important because it tells us whether a gradient step will cause as much of an improvement as we would expect based on the gradient alone. We can think of the second derivative as measuring curvature. Suppose we have a quadratic function (many functions that arise in practice are not quadratic but can be approximated well as quadratic, at least locally). If such a function has a second derivative of zero, then there is no curvature. It is a perfectly flat line, and its value can be predicted using only the gradient. If the gradient is , then we can make a step of size 1  along the negative gradient, and the cost function will decrease by . If the second derivative is negative, the function curves downward, so the cost function will actually decrease by more than . Finally, if the second derivative is positive, the function curves upward, so the cost function can decrease by less than . See 86
  • 103. CHAPTER 4. NUMERICAL COMPUTATION x f x ( ) Negative curvature x f x ( ) No curvature x f x ( ) Positive curvature Figure 4.4: The second derivative determines the curvature of a function. Here we show quadratic functions with various curvature. The dashed line indicates the value of the cost function we would expect based on the gradient information alone as we make a gradient step downhill. In the case of negative curvature, the cost function actually decreases faster than the gradient predicts. In the case of no curvature, the gradient predicts the decrease correctly. In the case of positive curvature, the function decreases slower than expected and eventually begins to increase, so steps that are too large can actually increase the function inadvertently. figure to see how different forms of curvature affect the relationship between 4.4 the value of the cost function predicted by the gradient and the true value. When our function has multiple input dimensions, there are many second derivatives. These derivatives can be collected together into a matrix called the Hessian matrix. The Hessian matrix is defined such that H x ( )( f ) H x ( )( f )i,j = ∂2 ∂xi∂xj f . ( ) x (4.6) Equivalently, the Hessian is the Jacobian of the gradient. Anywhere that the second partial derivatives are continuous, the differential operators are commutative, i.e. their order can be swapped: ∂2 ∂xi∂xj f( ) = x ∂2 ∂xj∂xi f . ( ) x (4.7) This implies that Hi,j = Hj,i, so the Hessian matrix is symmetric at such points. Most of the functions we encounter in the context of deep learning have a symmetric Hessian almost everywhere. Because the Hessian matrix is real and symmetric, we can decompose it into a set of real eigenvalues and an orthogonal basis of 87
eigenvectors. The second derivative in a specific direction represented by a unit vector d is given by d^⊤Hd. When d is an eigenvector of H, the second derivative in that direction is given by the corresponding eigenvalue. For other directions of d, the directional second derivative is a weighted average of all of the eigenvalues, with weights between 0 and 1, and eigenvectors that have smaller angle with d receiving more weight. The maximum eigenvalue determines the maximum second derivative, and the minimum eigenvalue determines the minimum second derivative.

The (directional) second derivative tells us how well we can expect a gradient descent step to perform. We can make a second-order Taylor series approximation to the function f(x) around the current point x⁽⁰⁾:

f(x) ≈ f(x⁽⁰⁾) + (x − x⁽⁰⁾)^⊤ g + (1/2)(x − x⁽⁰⁾)^⊤ H (x − x⁽⁰⁾),    (4.8)

where g is the gradient and H is the Hessian at x⁽⁰⁾. If we use a learning rate of ε, then the new point x will be given by x⁽⁰⁾ − εg. Substituting this into our approximation, we obtain

f(x⁽⁰⁾ − εg) ≈ f(x⁽⁰⁾) − ε g^⊤g + (1/2) ε² g^⊤Hg.    (4.9)

There are three terms here: the original value of the function, the expected improvement due to the slope of the function, and the correction we must apply to account for the curvature of the function. When this last term is too large, the gradient descent step can actually move uphill. When g^⊤Hg is zero or negative, the Taylor series approximation predicts that increasing ε forever will decrease f forever. In practice, the Taylor series is unlikely to remain accurate for large ε, so one must resort to more heuristic choices of ε in this case. When g^⊤Hg is positive, solving for the optimal step size that decreases the Taylor series approximation of the function the most yields

ε* = g^⊤g / (g^⊤Hg).    (4.10)

In the worst case, when g aligns with the eigenvector of H corresponding to the maximal eigenvalue λ_max, then this optimal step size is given by 1/λ_max. To the extent that the function we minimize can be approximated well by a quadratic function, the eigenvalues of the Hessian thus determine the scale of the learning rate.

The second derivative can be used to determine whether a critical point is a local maximum, a local minimum, or a saddle point. Recall that on a critical point, f′(x) = 0. When the second derivative f″(x) > 0, the first derivative f′(x) increases as we move to the right and decreases as we move to the left. This means
The second derivative can be used to determine whether a critical point is a local maximum, a local minimum, or a saddle point. Recall that at a critical point, f′(x) = 0. When the second derivative f″(x) > 0, the first derivative f′(x) increases as we move to the right and decreases as we move to the left. This means f′(x − ε) < 0 and f′(x + ε) > 0 for small enough ε. In other words, as we move right, the slope begins to point uphill to the right, and as we move left, the slope begins to point uphill to the left. Thus, when f′(x) = 0 and f″(x) > 0, we can conclude that x is a local minimum. Similarly, when f′(x) = 0 and f″(x) < 0, we can conclude that x is a local maximum. This is known as the second derivative test. Unfortunately, when f″(x) = 0, the test is inconclusive. In this case x may be a saddle point, or part of a flat region.

In multiple dimensions, we need to examine all of the second derivatives of the function. Using the eigendecomposition of the Hessian matrix, we can generalize the second derivative test to multiple dimensions. At a critical point, where ∇_x f(x) = 0, we can examine the eigenvalues of the Hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. When the Hessian is positive definite (all its eigenvalues are positive), the point is a local minimum. This can be seen by observing that the directional second derivative in any direction must be positive, and making reference to the univariate second derivative test. Likewise, when the Hessian is negative definite (all its eigenvalues are negative), the point is a local maximum. In multiple dimensions, it is actually possible to find positive evidence of saddle points in some cases. When at least one eigenvalue is positive and at least one eigenvalue is negative, we know that x is a local maximum on one cross section of f but a local minimum on another cross section. See figure 4.5 for an example. Finally, the multidimensional second derivative test can be inconclusive, just like the univariate version. The test is inconclusive whenever all of the non-zero eigenvalues have the same sign but at least one eigenvalue is zero. This is because the univariate second derivative test is inconclusive in the cross section corresponding to the zero eigenvalue.

In multiple dimensions, there is a different second derivative for each direction at a single point. The condition number of the Hessian at this point measures how much the second derivatives differ from each other. When the Hessian has a poor condition number, gradient descent performs poorly. This is because in one direction the derivative increases rapidly, while in another direction it increases slowly. Gradient descent is unaware of this change in the derivative, so it does not know that it needs to explore preferentially in the direction where the derivative remains negative for longer. A poor condition number also makes it difficult to choose a good step size. The step size must be small enough to avoid overshooting the minimum and going uphill in directions with strong positive curvature. This usually means that the step size is too small to make significant progress in other directions with less curvature. See figure 4.6 for an example.

This issue can be resolved by using information from the Hessian matrix to guide the search.
[Figure 4.5 is a 3-D surface plot of f(x) = x₁² − x₂² over the (x₁, x₂) plane.]

Figure 4.5: A saddle point containing both positive and negative curvature. The function in this example is f(x) = x₁² − x₂². Along the axis corresponding to x₁, the function curves upward. This axis is an eigenvector of the Hessian and has a positive eigenvalue. Along the axis corresponding to x₂, the function curves downward. This direction is an eigenvector of the Hessian with negative eigenvalue. The name "saddle point" derives from the saddle-like shape of this function. This is the quintessential example of a function with a saddle point. In more than one dimension, it is not necessary to have an eigenvalue of 0 in order to get a saddle point: it is only necessary to have both positive and negative eigenvalues. We can think of a saddle point with both signs of eigenvalues as being a local maximum within one cross section and a local minimum within another cross section.
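The eigenvalue test described above is easy to apply numerically. The following sketch is illustrative only and not part of the original text; it classifies the critical point of the saddle function f(x) = x₁² − x₂² from figure 4.5, and the tolerance used to treat an eigenvalue as zero is an arbitrary choice.

import numpy as np

def classify_critical_point(hessian, tol=1e-8):
    """Apply the multidimensional second derivative test to a Hessian
    evaluated at a critical point (a point where the gradient is zero)."""
    eigvals = np.linalg.eigvalsh(hessian)        # real eigenvalues of a symmetric matrix
    if np.all(eigvals > tol):
        return "local minimum"
    if np.all(eigvals < -tol):
        return "local maximum"
    if np.any(eigvals > tol) and np.any(eigvals < -tol):
        return "saddle point"
    return "inconclusive"                        # some eigenvalue is (numerically) zero

# Hessian of f(x) = x1**2 - x2**2, which is constant; the critical point is x = 0.
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])
print(classify_critical_point(H))                # -> saddle point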
[Figure 4.6 is a contour plot over the (x₁, x₂) plane, with both axes running from about −30 to 20 and the path of gradient descent drawn in red.]

Figure 4.6: Gradient descent fails to exploit the curvature information contained in the Hessian matrix. Here we use gradient descent to minimize a quadratic function f(x) whose Hessian matrix has condition number 5. This means that the direction of most curvature has five times more curvature than the direction of least curvature. In this case, the most curvature is in the direction [1, 1]⊤ and the least curvature is in the direction [1, −1]⊤. The red lines indicate the path followed by gradient descent. This very elongated quadratic function resembles a long canyon. Gradient descent wastes time repeatedly descending canyon walls, because they are the steepest feature. Because the step size is somewhat too large, it has a tendency to overshoot the bottom of the function and thus needs to descend the opposite canyon wall on the next iteration. The large positive eigenvalue of the Hessian corresponding to the eigenvector pointed in this direction indicates that this directional derivative is rapidly increasing, so an optimization algorithm based on the Hessian could predict that the steepest direction is not actually a promising search direction in this context.
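The effect of a poor condition number is easy to reproduce in a few lines of NumPy. The sketch below is illustrative only: the quadratic is an assumed stand-in whose Hessian has condition number 5 with its steep direction along [1, 1], as in figure 4.6, not the exact function plotted there.

import numpy as np

# A quadratic f(x) = 0.5 x^T H x whose Hessian has eigenvalue 5 along [1, 1]
# and eigenvalue 1 along [1, -1], so the condition number is 5.
H = np.array([[3.0, 2.0],
              [2.0, 3.0]])

def steps_to_converge(hessian, eps, x0, tol=1e-6, max_iter=100_000):
    """Run gradient descent with a fixed step size and count the iterations
    needed to drive the gradient norm below tol."""
    x = x0.copy()
    for t in range(max_iter):
        g = hessian @ x
        if np.linalg.norm(g) < tol:
            return t
        x = x - eps * g
    return max_iter

x0 = np.array([-10.0, 5.0])
eps = 0.39   # just inside the stability limit 2 / lambda_max = 0.4
print("condition number 5:", steps_to_converge(H, eps, x0))
print("condition number 1:", steps_to_converge(np.eye(2), eps, x0))
# The ill-conditioned problem needs roughly ten times more iterations: the iterate
# overshoots and zig-zags across the steep [1, 1] direction, so the error decays slowly.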
The simplest method for doing so is known as Newton's method. Newton's method is based on using a second-order Taylor series expansion to approximate f(x) near some point x^(0):

    f(x) ≈ f(x^(0)) + (x − x^(0))⊤ ∇_x f(x^(0)) + ½ (x − x^(0))⊤ H(f)(x^(0)) (x − x^(0)).    (4.11)

If we then solve for the critical point of this function, we obtain:

    x* = x^(0) − H(f)(x^(0))⁻¹ ∇_x f(x^(0)).    (4.12)

When f is a positive definite quadratic function, Newton's method consists of applying equation 4.12 once to jump to the minimum of the function directly. When f is not truly quadratic but can be locally approximated as a positive definite quadratic, Newton's method consists of applying equation 4.12 multiple times. Iteratively updating the approximation and jumping to the minimum of the approximation can reach the critical point much faster than gradient descent would. This is a useful property near a local minimum, but it can be a harmful property near a saddle point. As discussed in section 8.2.3, Newton's method is only appropriate when the nearby critical point is a minimum (all the eigenvalues of the Hessian are positive), whereas gradient descent is not attracted to saddle points unless the gradient points toward them.

Optimization algorithms that use only the gradient, such as gradient descent, are called first-order optimization algorithms. Optimization algorithms that also use the Hessian matrix, such as Newton's method, are called second-order optimization algorithms (Nocedal and Wright, 2006).
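As a quick illustration of the Newton update in equation 4.12, the following sketch (mine, not part of the original text) applies a single Newton step to an arbitrary positive definite quadratic and confirms that it lands exactly on the minimizer.

import numpy as np

# Newton's method (equation 4.12) on the quadratic f(x) = 0.5 x^T H x - b^T x.
# The particular H and b are arbitrary illustrative choices.
H = np.array([[3.0, 2.0],
              [2.0, 3.0]])
b = np.array([1.0, -1.0])
grad = lambda x: H @ x - b               # gradient of f; the Hessian of f is H

x0 = np.array([5.0, 5.0])
# One Newton step: x* = x0 - H^{-1} grad f(x0). Solving the linear system is
# preferable to forming the matrix inverse explicitly.
x_star = x0 - np.linalg.solve(H, grad(x0))

print(x_star)
print(np.allclose(grad(x_star), 0.0))    # True: the gradient vanishes after a single step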
The optimization algorithms employed in most contexts in this book are applicable to a wide variety of functions, but come with almost no guarantees. Deep learning algorithms tend to lack guarantees because the family of functions used in deep learning is quite complicated. In many other fields, the dominant approach to optimization is to design optimization algorithms for a limited family of functions.

In the context of deep learning, we sometimes gain some guarantees by restricting ourselves to functions that are either Lipschitz continuous or have Lipschitz continuous derivatives. A Lipschitz continuous function is a function f whose rate of change is bounded by a Lipschitz constant L:

    ∀x, ∀y, |f(x) − f(y)| ≤ L ‖x − y‖₂.    (4.13)

This property is useful because it allows us to quantify our assumption that a small change in the input made by an algorithm such as gradient descent will have a small change in the output. Lipschitz continuity is also a fairly weak constraint, and many optimization problems in deep learning can be made Lipschitz continuous with relatively minor modifications.

Perhaps the most successful field of specialized optimization is convex optimization. Convex optimization algorithms are able to provide many more guarantees by making stronger restrictions. Convex optimization algorithms are applicable only to convex functions, that is, functions for which the Hessian is positive semidefinite everywhere. Such functions are well-behaved because they lack saddle points and all of their local minima are necessarily global minima. However, most problems in deep learning are difficult to express in terms of convex optimization. Convex optimization is used only as a subroutine of some deep learning algorithms. Ideas from the analysis of convex optimization algorithms can be useful for proving the convergence of deep learning algorithms. However, in general, the importance of convex optimization is greatly diminished in the context of deep learning. For more information about convex optimization, see Boyd and Vandenberghe (2004) or Rockafellar (1997).

4.4 Constrained Optimization

Sometimes we wish not only to maximize or minimize a function f(x) over all possible values of x. Instead we may wish to find the maximal or minimal value of f(x) for values of x in some set S. This is known as constrained optimization. Points x that lie within the set S are called feasible points in constrained optimization terminology.

We often wish to find a solution that is small in some sense. A common approach in such situations is to impose a norm constraint, such as ‖x‖ ≤ 1.

One simple approach to constrained optimization is simply to modify gradient descent taking the constraint into account. If we use a small constant step size ε, we can make gradient descent steps, then project the result back into S. If we use a line search, we can search only over step sizes ε that yield new x points that are feasible, or we can project each point on the line back into the constraint region. When possible, this method can be made more efficient by projecting the gradient into the tangent space of the feasible region before taking the step or beginning the line search (Rosen, 1960).
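Below is a minimal sketch of the "take a gradient step, then project back into S" strategy just described, using the norm ball S = {x : ‖x‖₂ ≤ 1} as the feasible set; the least-squares objective, step size, and iteration count are arbitrary choices and not taken from the text.

import numpy as np

# Projected gradient descent for: minimize 0.5 * ||Ax - b||^2 subject to ||x||_2 <= 1.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([3.0, 1.0])
grad = lambda x: A.T @ (A @ x - b)

def project_to_unit_ball(x):
    """Euclidean projection onto the feasible set S = {x : ||x||_2 <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

x = np.zeros(2)
eps = 0.05
for _ in range(500):
    x = project_to_unit_ball(x - eps * grad(x))   # gradient step, then project back into S

print(x, np.linalg.norm(x))   # the unconstrained minimizer is infeasible here,
                              # so the constrained solution lies on the boundary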
A more sophisticated approach is to design a different, unconstrained optimization problem whose solution can be converted into a solution to the original, constrained optimization problem. For example, if we want to minimize f(x) for x ∈ R² with x constrained to have exactly unit L² norm, we can instead minimize g(θ) = f([cos θ, sin θ]⊤) with respect to θ, then return [cos θ, sin θ]⊤ as the solution to the original problem. This approach requires creativity; the transformation between optimization problems must be designed specifically for each case we encounter.

The Karush–Kuhn–Tucker (KKT) approach¹ provides a very general solution to constrained optimization. With the KKT approach, we introduce a new function called the generalized Lagrangian or generalized Lagrange function.

To define the Lagrangian, we first need to describe S in terms of equations and inequalities. We want a description of S in terms of m functions g^(i) and n functions h^(j) so that S = {x | ∀i, g^(i)(x) = 0 and ∀j, h^(j)(x) ≤ 0}. The equations involving g^(i) are called the equality constraints, and the inequalities involving h^(j) are called the inequality constraints.

We introduce new variables λ_i and α_j for each constraint; these are called the KKT multipliers. The generalized Lagrangian is then defined as

    L(x, λ, α) = f(x) + Σ_i λ_i g^(i)(x) + Σ_j α_j h^(j)(x).    (4.14)

We can now solve a constrained minimization problem using unconstrained optimization of the generalized Lagrangian. Observe that, so long as at least one feasible point exists and f(x) is not permitted to have value ∞, then

    min_x max_λ max_{α, α≥0} L(x, λ, α)    (4.15)

has the same optimal objective function value and set of optimal points x as

    min_{x∈S} f(x).    (4.16)

This follows because any time the constraints are satisfied,

    max_λ max_{α, α≥0} L(x, λ, α) = f(x),    (4.17)

while any time a constraint is violated,

    max_λ max_{α, α≥0} L(x, λ, α) = ∞.    (4.18)

¹ The KKT approach generalizes the method of Lagrange multipliers, which allows equality constraints but not inequality constraints.
These properties guarantee that no infeasible point can be optimal, and that the optimum within the feasible points is unchanged.

To perform constrained maximization, we can construct the generalized Lagrange function of −f(x), which leads to this optimization problem:

    min_x max_λ max_{α, α≥0} −f(x) + Σ_i λ_i g^(i)(x) + Σ_j α_j h^(j)(x).    (4.19)

We may also convert this to a problem with maximization in the outer loop:

    max_x min_λ min_{α, α≥0} f(x) + Σ_i λ_i g^(i)(x) − Σ_j α_j h^(j)(x).    (4.20)

The sign of the term for the equality constraints does not matter; we may define it with addition or subtraction as we wish, because the optimization is free to choose any sign for each λ_i.

The inequality constraints are particularly interesting. We say that a constraint h^(i)(x) is active if h^(i)(x*) = 0. If a constraint is not active, then the solution to the problem found using that constraint would remain at least a local solution if that constraint were removed. It is possible that an inactive constraint excludes other solutions. For example, a convex problem with an entire region of globally optimal points (a wide, flat region of equal cost) could have a subset of this region eliminated by constraints, or a non-convex problem could have better local stationary points excluded by a constraint that is inactive at convergence. However, the point found at convergence remains a stationary point whether or not the inactive constraints are included. Because an inactive h^(i) has negative value, the solution to min_x max_λ max_{α, α≥0} L(x, λ, α) will have α_i = 0. We can thus observe that at the solution, α ⊙ h(x) = 0. In other words, for all i, we know that at least one of the constraints α_i ≥ 0 and h^(i)(x) ≤ 0 must be active at the solution. To gain some intuition for this idea, we can say that either the solution is on the boundary imposed by the inequality and we must use its KKT multiplier to influence the solution to x, or the inequality has no influence on the solution and we represent this by zeroing out its KKT multiplier.

A simple set of properties describes the optimal points of constrained optimization problems. These properties are called the Karush-Kuhn-Tucker (KKT) conditions (Karush, 1939; Kuhn and Tucker, 1951). They are necessary conditions, but not always sufficient conditions, for a point to be optimal. The conditions are:

• The gradient of the generalized Lagrangian is zero.
• All constraints on both x and the KKT multipliers are satisfied.
• The inequality constraints exhibit "complementary slackness": α ⊙ h(x) = 0.
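These conditions can be checked directly on a small, hand-worked example. The sketch below uses a toy problem of my own, not one from the book: minimize f(x) = x² subject to h(x) = 1 − x ≤ 0, whose solution is x* = 1 with KKT multiplier α* = 2.

# KKT conditions for: minimize f(x) = x**2 subject to h(x) = 1 - x <= 0.
f_grad = lambda x: 2.0 * x
h = lambda x: 1.0 - x
h_grad = lambda x: -1.0

x_star, alpha_star = 1.0, 2.0

lagrangian_grad = f_grad(x_star) + alpha_star * h_grad(x_star)
print("gradient of the generalized Lagrangian:", lagrangian_grad)           # 0.0
print("primal feasibility, h(x*) <= 0:", h(x_star) <= 0)                    # True
print("dual feasibility, alpha* >= 0:", alpha_star >= 0)                    # True
print("complementary slackness, alpha* * h(x*):", alpha_star * h(x_star))   # 0.0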
For more information about the KKT approach, see Nocedal and Wright (2006).

4.5 Example: Linear Least Squares

Suppose we want to find the value of x that minimizes

    f(x) = ½ ‖Ax − b‖₂².    (4.21)

There are specialized linear algebra algorithms that can solve this problem efficiently. However, we can also explore how to solve it using gradient-based optimization as a simple example of how these techniques work.

First, we need to obtain the gradient:

    ∇_x f(x) = A⊤(Ax − b) = A⊤Ax − A⊤b.    (4.22)

We can then follow this gradient downhill, taking small steps. See algorithm 4.1 for details.

Algorithm 4.1 An algorithm to minimize f(x) = ½ ‖Ax − b‖₂² with respect to x using gradient descent, starting from an arbitrary value of x.

    Set the step size (ε) and tolerance (δ) to small, positive numbers.
    while ‖A⊤Ax − A⊤b‖₂ > δ do
        x ← x − ε (A⊤Ax − A⊤b)
    end while

One can also solve this problem using Newton's method. In this case, because the true function is quadratic, the quadratic approximation employed by Newton's method is exact, and the algorithm converges to the global minimum in a single step.
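A direct NumPy transcription of algorithm 4.1 might look as follows; the matrix A, the vector b, and the default step size and tolerance are placeholder choices made only for illustration.

import numpy as np

def least_squares_gd(A, b, eps=1e-2, delta=1e-8, max_iter=100_000):
    """Algorithm 4.1: minimize f(x) = 0.5 * ||Ax - b||_2^2 by gradient descent."""
    x = np.zeros(A.shape[1])                  # an arbitrary starting point
    for _ in range(max_iter):                 # guard against non-termination
        grad = A.T @ A @ x - A.T @ b          # the gradient from equation 4.22
        if np.linalg.norm(grad) <= delta:
            break
        x = x - eps * grad
    return x

# Placeholder data.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 2.0])

print(least_squares_gd(A, b))                 # approximately [-0.667, 0.917]
print(np.linalg.lstsq(A, b, rcond=None)[0])   # the direct solution, for comparison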
Now suppose we wish to minimize the same function, but subject to the constraint x⊤x ≤ 1. To do so, we introduce the Lagrangian

    L(x, λ) = f(x) + λ (x⊤x − 1).    (4.23)

We can now solve the problem

    min_x max_{λ, λ≥0} L(x, λ).    (4.24)

The smallest-norm solution to the unconstrained least squares problem may be found using the Moore-Penrose pseudoinverse: x = A⁺b. If this point is feasible, then it is the solution to the constrained problem. Otherwise, we must find a solution where the constraint is active. By differentiating the Lagrangian with respect to x, we obtain the equation

    A⊤Ax − A⊤b + 2λx = 0.    (4.25)

This tells us that the solution will take the form

    x = (A⊤A + 2λI)⁻¹ A⊤b.    (4.26)

The magnitude of λ must be chosen such that the result obeys the constraint. We can find this value by performing gradient ascent on λ. To do so, observe

    ∂L(x, λ)/∂λ = x⊤x − 1.    (4.27)

When the norm of x exceeds 1, this derivative is positive, so to follow the derivative uphill and increase the Lagrangian with respect to λ, we increase λ. Because the coefficient on the x⊤x penalty has increased, solving the linear equation for x will now yield a solution with smaller norm. The process of solving the linear equation and adjusting λ continues until x has the correct norm and the derivative on λ is 0.

This concludes the mathematical preliminaries that we use to develop machine learning algorithms. We are now ready to build and analyze some full-fledged learning systems.
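Before moving on, here is a minimal sketch of the procedure just described, alternating between solving equation 4.26 for x and taking a gradient ascent step on λ via equation 4.27. It is illustrative only; the placeholder A and b are the same as in the previous sketch, and the ascent rate is an arbitrary choice.

import numpy as np

def constrained_least_squares(A, b, lr=0.05, tol=1e-8, max_iter=10_000):
    """Minimize 0.5 * ||Ax - b||_2^2 subject to x^T x <= 1 by alternating between
    solving equation 4.26 for x and a gradient ascent step on lambda (equation 4.27)."""
    x = np.linalg.pinv(A) @ b                # Moore-Penrose solution of the unconstrained problem
    if x @ x <= 1.0:
        return x                             # already feasible: the constraint is inactive
    lam, n = 0.0, A.shape[1]
    for _ in range(max_iter):
        x = np.linalg.solve(A.T @ A + 2.0 * lam * np.eye(n), A.T @ b)   # equation 4.26
        dlam = x @ x - 1.0                                              # equation 4.27
        if abs(dlam) < tol:
            break
        lam = max(0.0, lam + lr * dlam)      # ascend on lambda while keeping lambda >= 0
    return x

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([1.0, 2.0, 2.0])
x = constrained_least_squares(A, b)
print(x, x @ x)                              # x^T x is approximately 1: the constraint is active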
  • 114. Chapter 5 Machine Learning Basics Deep learning is a specific kind of machine learning. In order to understand deep learning well, one must have a solid understanding of the basic principles of machine learning. This chapter provides a brief course in the most important general principles that will be applied throughout the rest of the book. Novice readers or those who want a wider perspective are encouraged to consider machine learning textbooks with a more comprehensive coverage of the fundamentals, such as Murphy ( ) or ( ). If you are already familiar with machine learning basics, 2012 Bishop 2006 feel free to skip ahead to section . That section covers some perspectives 5.11 on traditional machine learning techniques that have strongly influenced the development of deep learning algorithms. We begin with a definition of what a learning algorithm is, and present an example: the linear regression algorithm. We then proceed to describe how the challenge of fitting the training data differs from the challenge of finding patterns that generalize to new data. Most machine learning algorithms have settings called hyperparameters that must be determined external to the learning algorithm itself; we discuss how to set these using additional data. Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions; we therefore present the two central approaches to statistics: frequentist estimators and Bayesian inference. Most machine learning algorithms can be divided into the categories of supervised learning and unsupervised learning; we describe these categories and give some examples of simple learning algorithms from each category. Most deep learning algorithms are based on an optimization algorithm called stochastic gradient descent. We describe how to combine various algorithm components such as 98
  • 115. CHAPTER 5. MACHINE LEARNING BASICS an optimization algorithm, a cost function, a model, and a dataset to build a machine learning algorithm. Finally, in section , we describe some of the 5.11 factors that have limited the ability of traditional machine learning to generalize. These challenges have motivated the development of deep learning algorithms that overcome these obstacles. 5.1 Learning Algorithms A machine learning algorithm is an algorithm that is able to learn from data. But what do we mean by learning? Mitchell 1997 ( ) provides the definition “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” One can imagine a very wide variety of experiences E, tasks T, and performance measures P , and we do not make any attempt in this book to provide a formal definition of what may be used for each of these entities. Instead, the following sections provide intuitive descriptions and examples of the different kinds of tasks, performance measures and experiences that can be used to construct machine learning algorithms. 5.1.1 The Task, T Machine learning allows us to tackle tasks that are too difficult to solve with fixed programs written and designed by human beings. From a scientific and philosophical point of view, machine learning is interesting because developing our understanding of machine learning entails developing our understanding of the principles that underlie intelligence. In this relatively formal definition of the word “task,” the process of learning itself is not the task. Learning is our means of attaining the ability to perform the task. For example, if we want a robot to be able to walk, then walking is the task. We could program the robot to learn to walk, or we could attempt to directly write a program that specifies how to walk manually. Machine learning tasks are usually described in terms of how the machine learning system should process an example. An example is a collection of features that have been quantitatively measured from some object or event that we want the machine learning system to process. We typically represent an example as a vector x ∈ Rn where each entry xi of the vector is another feature. For example, the features of an image are usually the values of the pixels in the image. 99
  • 116. CHAPTER 5. MACHINE LEARNING BASICS Many kinds of tasks can be solved with machine learning. Some of the most common machine learning tasks include the following: • Classification: In this type of task, the computer program is asked to specify which of k categories some input belongs to. To solve this task, the learning algorithm is usually asked to produce a function f : Rn → {1, . . . , k}. When y = f(x), the model assigns an input described by vector x to a category identified by numeric code y. There are other variants of the classification task, for example, where f outputs a probability distribution over classes. An example of a classification task is object recognition, where the input is an image (usually described as a set of pixel brightness values), and the output is a numeric code identifying the object in the image. For example, the Willow Garage PR2 robot is able to act as a waiter that can recognize different kinds of drinks and deliver them to people on command (Good- fellow 2010 et al., ). Modern object recognition is best accomplished with deep learning ( , ; , ). Object Krizhevsky et al. 2012 Ioffe and Szegedy 2015 recognition is the same basic technology that allows computers to recognize faces (Taigman 2014 et al., ), which can be used to automatically tag people in photo collections and allow computers to interact more naturally with their users. • Classification with missing inputs: Classification becomes more chal- lenging if the computer program is not guaranteed that every measurement in its input vector will always be provided. In order to solve the classification task, the learning algorithm only has to define a function mapping single from a vector input to a categorical output. When some of the inputs may be missing, rather than providing a single classification function, the learning algorithm must learn a of functions. Each function corresponds to classi- set fying x with a different subset of its inputs missing. This kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. One way to efficiently define such a large set of functions is to learn a probability distribution over all of the relevant variables, then solve the classification task by marginalizing out the missing variables. With n input variables, we can now obtain all 2n different classifi- cation functions needed for each possible set of missing inputs, but we only need to learn a single function describing the joint probability distribution. See Goodfellow 2013b et al. ( ) for an example of a deep probabilistic model applied to such a task in this way. Many of the other tasks described in this section can also be generalized to work with missing inputs; classification with missing inputs is just one example of what machine learning can do. 100
  • 117. CHAPTER 5. MACHINE LEARNING BASICS • Regression: In this type of task, the computer program is asked to predict a numerical value given some input. To solve this task, the learning algorithm is asked to output a function f : Rn → R. This type of task is similar to classification, except that the format of output is different. An example of a regression task is the prediction of the expected claim amount that an insured person will make (used to set insurance premiums), or the prediction of future prices of securities. These kinds of predictions are also used for algorithmic trading. • Transcription: In this type of task, the machine learning system is asked to observe a relatively unstructured representation of some kind of data and transcribe it into discrete, textual form. For example, in optical character recognition, the computer program is shown a photograph containing an image of text and is asked to return this text in the form of a sequence of characters (e.g., in ASCII or Unicode format). Google Street View uses deep learning to process address numbers in this way ( , Goodfellow et al. 2014d). Another example is speech recognition, where the computer program is provided an audio waveform and emits a sequence of characters or word ID codes describing the words that were spoken in the audio recording. Deep learning is a crucial component of modern speech recognition systems used at major companies including Microsoft, IBM and Google ( , Hinton et al. 2012b). • Machine translation: In a machine translation task, the input already consists of a sequence of symbols in some language, and the computer program must convert this into a sequence of symbols in another language. This is commonly applied to natural languages, such as translating from English to French. Deep learning has recently begun to have an important impact on this kind of task (Sutskever 2014 Bahdanau 2015 et al., ; et al., ). • Structured output: Structured output tasks involve any task where the output is a vector (or other data structure containing multiple values) with important relationships between the different elements. This is a broad category, and subsumes the transcription and translation tasks described above, but also many other tasks. One example is parsing—mapping a natural language sentence into a tree that describes its grammatical structure and tagging nodes of the trees as being verbs, nouns, or adverbs, and so on. See ( ) for an example of deep learning applied to a parsing Collobert 2011 task. Another example is pixel-wise segmentation of images, where the computer program assigns every pixel in an image to a specific category. For 101
  • 118. CHAPTER 5. MACHINE LEARNING BASICS example, deep learning can be used to annotate the locations of roads in aerial photographs (Mnih and Hinton 2010 , ). The output need not have its form mirror the structure of the input as closely as in these annotation-style tasks. For example, in image captioning, the computer program observes an image and outputs a natural language sentence describing the image (Kiros et al. et al. , , ; 2014a b Mao , ; 2015 Vinyals 2015b Donahue 2014 et al., ; et al., ; Karpathy and Li 2015 Fang 2015 Xu 2015 , ; et al., ; et al., ). These tasks are called structured output tasks because the program must output several values that are all tightly inter-related. For example, the words produced by an image captioning program must form a valid sentence. • Anomaly detection: In this type of task, the computer program sifts through a set of events or objects, and flags some of them as being unusual or atypical. An example of an anomaly detection task is credit card fraud detection. By modeling your purchasing habits, a credit card company can detect misuse of your cards. If a thief steals your credit card or credit card information, the thief’s purchases will often come from a different probability distribution over purchase types than your own. The credit card company can prevent fraud by placing a hold on an account as soon as that card has been used for an uncharacteristic purchase. See ( ) for a Chandola et al. 2009 survey of anomaly detection methods. • Synthesis and sampling: In this type of task, the machine learning al- gorithm is asked to generate new examples that are similar to those in the training data. Synthesis and sampling via machine learning can be useful for media applications where it can be expensive or boring for an artist to generate large volumes of content by hand. For example, video games can automatically generate textures for large objects or landscapes, rather than requiring an artist to manually label each pixel ( , ). In some Luo et al. 2013 cases, we want the sampling or synthesis procedure to generate some specific kind of output given the input. For example, in a speech synthesis task, we provide a written sentence and ask the program to emit an audio waveform containing a spoken version of that sentence. This is a kind of structured output task, but with the added qualification that there is no single correct output for each input, and we explicitly desire a large amount of variation in the output, in order for the output to seem more natural and realistic. • Imputation of missing values: In this type of task, the machine learning algorithm is given a new example x ∈ Rn, but with some entries xi of x missing. The algorithm must provide a prediction of the values of the missing entries. 102
  • 119. CHAPTER 5. MACHINE LEARNING BASICS • Denoising: In this type of task, the machine learning algorithm is given in input a corrupted example x̃ ∈ Rn obtained by an unknown corruption process from a clean example x ∈ Rn . The learner must predict the clean example x from its corrupted version x̃, or more generally predict the conditional probability distribution p(x | x̃). • Density estimation or probability mass function estimation: In the density estimation problem, the machine learning algorithm is asked to learn a function pmodel : Rn → R, where pmodel(x) can be interpreted as a probability density function (if x is continuous) or a probability mass function (if x is discrete) on the space that the examples were drawn from. To do such a task well (we will specify exactly what that means when we discuss performance measures P), the algorithm needs to learn the structure of the data it has seen. It must know where examples cluster tightly and where they are unlikely to occur. Most of the tasks described above require the learning algorithm to at least implicitly capture the structure of the probability distribution. Density estimation allows us to explicitly capture that distribution. In principle, we can then perform computations on that distribution in order to solve the other tasks as well. For example, if we have performed density estimation to obtain a probability distribution p(x), we can use that distribution to solve the missing value imputation task. If a value xi is missing and all of the other values, denoted x−i, are given, then we know the distribution over it is given by p(xi | x−i). In practice, density estimation does not always allow us to solve all of these related tasks, because in many cases the required operations on p(x) are computationally intractable. Of course, many other tasks and types of tasks are possible. The types of tasks we list here are intended only to provide examples of what machine learning can do, not to define a rigid taxonomy of tasks. 5.1.2 The Performance Measure, P In order to evaluate the abilities of a machine learning algorithm, we must design a quantitative measure of its performance. Usually this performance measure P is specific to the task being carried out by the system. T For tasks such as classification, classification with missing inputs, and tran- scription, we often measure the accuracy of the model. Accuracy is just the proportion of examples for which the model produces the correct output. We can 103
  • 120. CHAPTER 5. MACHINE LEARNING BASICS also obtain equivalent information by measuring the error rate, the proportion of examples for which the model produces an incorrect output. We often refer to the error rate as the expected 0-1 loss. The 0-1 loss on a particular example is 0 if it is correctly classified and 1 if it is not. For tasks such as density estimation, it does not make sense to measure accuracy, error rate, or any other kind of 0-1 loss. Instead, we must use a different performance metric that gives the model a continuous-valued score for each example. The most common approach is to report the average log-probability the model assigns to some examples. Usually we are interested in how well the machine learning algorithm performs on data that it has not seen before, since this determines how well it will work when deployed in the real world. We therefore evaluate these performance measures using a test set of data that is separate from the data used for training the machine learning system. The choice of performance measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system. In some cases, this is because it is difficult to decide what should be measured. For example, when performing a transcription task, should we measure the accuracy of the system at transcribing entire sequences, or should we use a more fine-grained performance measure that gives partial credit for getting some elements of the sequence correct? When performing a regression task, should we penalize the system more if it frequently makes medium-sized mistakes or if it rarely makes very large mistakes? These kinds of design choices depend on the application. In other cases, we know what quantity we would ideally like to measure, but measuring it is impractical. For example, this arises frequently in the context of density estimation. Many of the best probabilistic models represent probability distributions only implicitly. Computing the actual probability value assigned to a specific point in space in many such models is intractable. In these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion. 5.1.3 The Experience, E Machine learning algorithms can be broadly categorized as unsupervised or supervised by what kind of experience they are allowed to have during the learning process. Most of the learning algorithms in this book can be understood as being allowed to experience an entire dataset. A dataset is a collection of many examples, as 104
  • 121. CHAPTER 5. MACHINE LEARNING BASICS defined in section . Sometimes we will also call examples . 5.1.1 data points One of the oldest datasets studied by statisticians and machine learning re- searchers is the Iris dataset ( , ). It is a collection of measurements of Fisher 1936 different parts of 150 iris plants. Each individual plant corresponds to one example. The features within each example are the measurements of each of the parts of the plant: the sepal length, sepal width, petal length and petal width. The dataset also records which species each plant belonged to. Three different species are represented in the dataset. Unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated a dataset, whether explicitly as in density estimation or implicitly for tasks like synthesis or denoising. Some other unsupervised learning algorithms perform other roles, like clustering, which consists of dividing the dataset into clusters of similar examples. Supervised learning algorithms experience a dataset containing features, but each example is also associated with a label or target. For example, the Iris dataset is annotated with the species of each iris plant. A supervised learning algorithm can study the Iris dataset and learn to classify iris plants into three different species based on their measurements. Roughly speaking, unsupervised learning involves observing several examples of a random vector x, and attempting to implicitly or explicitly learn the proba- bility distribution p(x), or some interesting properties of that distribution, while supervised learning involves observing several examples of a random vector x and an associated value or vector y, and learning to predict y from x, usually by estimating p(y x | ). The term supervised learning originates from the view of the target y being provided by an instructor or teacher who shows the machine learning system what to do. In unsupervised learning, there is no instructor or teacher, and the algorithm must learn to make sense of the data without this guide. Unsupervised learning and supervised learning are not formally defined terms. The lines between them are often blurred. Many machine learning technologies can be used to perform both tasks. For example, the chain rule of probability states that for a vector x ∈ Rn, the joint distribution can be decomposed as p( ) = x n  i=1 p(xi | x1, . . . , xi−1). (5.1) This decomposition means that we can solve the ostensibly unsupervised problem of modeling p(x) by splitting it into n supervised learning problems. Alternatively, we 105
  • 122. CHAPTER 5. MACHINE LEARNING BASICS can solve the supervised learning problem of learning p(y | x) by using traditional unsupervised learning technologies to learn the joint distribution p(x, y) and inferring p y ( | x) = p , y (x )  y p , y (x ) . (5.2) Though unsupervised learning and supervised learning are not completely formal or distinct concepts, they do help to roughly categorize some of the things we do with machine learning algorithms. Traditionally, people refer to regression, classification and structured output problems as supervised learning. Density estimation in support of other tasks is usually considered unsupervised learning. Other variants of the learning paradigm are possible. For example, in semi- supervised learning, some examples include a supervision target but others do not. In multi-instance learning, an entire collection of examples is labeled as containing or not containing an example of a class, but the individual members of the collection are not labeled. For a recent example of multi-instance learning with deep models, see Kotzias 2015 et al. ( ). Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment, so there is a feedback loop between the learning system and its experiences. Such algorithms are beyond the scope of this book. Please see ( ) Sutton and Barto 1998 or Bertsekas and Tsitsiklis 1996 ( ) for information about reinforcement learning, and ( ) for the deep learning approach to reinforcement learning. Mnih et al. 2013 Most machine learning algorithms simply experience a dataset. A dataset can be described in many ways. In all cases, a dataset is a collection of examples, which are in turn collections of features. One common way of describing a dataset is with a . A design design matrix matrix is a matrix containing a different example in each row. Each column of the matrix corresponds to a different feature. For instance, the Iris dataset contains 150 examples with four features for each example. This means we can represent the dataset with a design matrix X ∈ R150 4 × , where Xi,1 is the sepal length of plant i, Xi,2 is the sepal width of plant i, etc. We will describe most of the learning algorithms in this book in terms of how they operate on design matrix datasets. Of course, to describe a dataset as a design matrix, it must be possible to describe each example as a vector, and each of these vectors must be the same size. This is not always possible. For example, if you have a collection of photographs with different widths and heights, then different photographs will contain different numbers of pixels, so not all of the photographs may be described with the same length of vector. Section and chapter describe how to handle different 9.7 10 106
  • 123. CHAPTER 5. MACHINE LEARNING BASICS types of such heterogeneous data. In cases like these, rather than describing the dataset as a matrix with m rows, we will describe it as a set containing m elements: {x(1) , x(2) , . . . , x( ) m }. This notation does not imply that any two example vectors x( ) i and x( ) j have the same size. In the case of supervised learning, the example contains a label or target as well as a collection of features. For example, if we want to use a learning algorithm to perform object recognition from photographs, we need to specify which object appears in each of the photos. We might do this with a numeric code, with 0 signifying a person, 1 signifying a car, 2 signifying a cat, etc. Often when working with a dataset containing a design matrix of feature observations X, we also provide a vector of labels , with y yi providing the label for example . i Of course, sometimes the label may be more than just a single number. For example, if we want to train a speech recognition system to transcribe entire sentences, then the label for each example sentence is a sequence of words. Just as there is no formal definition of supervised and unsupervised learning, there is no rigid taxonomy of datasets or experiences. The structures described here cover most cases, but it is always possible to design new ones for new applications. 5.1.4 Example: Linear Regression Our definition of a machine learning algorithm as an algorithm that is capable of improving a computer program’s performance at some task via experience is somewhat abstract. To make this more concrete, we present an example of a simple machine learning algorithm: linear regression. We will return to this example repeatedly as we introduce more machine learning concepts that help to understand its behavior. As the name implies, linear regression solves a regression problem. In other words, the goal is to build a system that can take a vector x ∈ Rn as input and predict the value of a scalar y ∈ R as its output. In the case of linear regression, the output is a linear function of the input. Let ŷ be the value that our model predicts should take on. We define the output to be y ŷ = w x (5.3) where w ∈ Rn is a vector of . parameters Parameters are values that control the behavior of the system. In this case, wi is the coefficient that we multiply by feature xi before summing up the contributions from all the features. We can think of w as a set of weights that determine how each feature affects the prediction. If a feature xi receives a positive weight wi, 107
  • 124. CHAPTER 5. MACHINE LEARNING BASICS then increasing the value of that feature increases the value of our prediction ŷ. If a feature receives a negative weight, then increasing the value of that feature decreases the value of our prediction. If a feature’s weight is large in magnitude, then it has a large effect on the prediction. If a feature’s weight is zero, it has no effect on the prediction. We thus have a definition of our task T : to predict y from x by outputting ŷ = w x. Next we need a definition of our performance measure, . P Suppose that we have a design matrix of m example inputs that we will not use for training, only for evaluating how well the model performs. We also have a vector of regression targets providing the correct value of y for each of these examples. Because this dataset will only be used for evaluation, we call it the test set. We refer to the design matrix of inputs as X( ) test and the vector of regression targets as y( ) test . One way of measuring the performance of the model is to compute the mean squared error of the model on the test set. If ŷ( ) test gives the predictions of the model on the test set, then the mean squared error is given by MSEtest = 1 m  i (ŷ( ) test − y( ) test )2 i . (5.4) Intuitively, one can see that this error measure decreases to 0 when ŷ( ) test = y( ) test . We can also see that MSEtest = 1 m ||ŷ( ) test − y( ) test ||2 2 , (5.5) so the error increases whenever the Euclidean distance between the predictions and the targets increases. To make a machine learning algorithm, we need to design an algorithm that will improve the weights w in a way that reduces MSEtest when the algorithm is allowed to gain experience by observing a training set (X( ) train , y( ) train ). One intuitive way of doing this (which we will justify later, in section ) is just to 5.5.1 minimize the mean squared error on the training set, MSEtrain. To minimize MSEtrain, we can simply solve for where its gradient is : 0 ∇wMSEtrain = 0 (5.6) ⇒ ∇w 1 m ||ŷ( ) train − y( ) train ||2 2 = 0 (5.7) ⇒ 1 m ∇w||X( ) train w y − ( ) train ||2 2 = 0 (5.8) 108
  • 125. CHAPTER 5. MACHINE LEARNING BASICS − − 1 0 . 0 5 0 0 0 5 1 0 . . . . x1 −3 −2 −1 0 1 2 3 y Linear regression example 0 5 1 0 1 5 . . . w1 0 20 . 0 25 . 0 30 . 0 35 . 0 40 . 0 45 . 0 50 . 0 55 . MSE (train) Optimization of w Figure 5.1: A linear regression problem, with a training set consisting of ten data points, each containing one feature. Because there is only one feature, the weight vector w contains only a single parameter to learn,w1. (Left)Observe that linear regression learns to set w1 such that the line y = w1 x comes as close as possible to passing through all the training points. The plotted point indicates the value of (Right) w1 found by the normal equations, which we can see minimizes the mean squared error on the training set. ⇒ ∇w  X ( ) train w y − ( ) train   X( ) train w y − ( ) train  = 0 (5.9) ⇒ ∇w  w X ( ) train  X( ) train w w − 2  X( ) train  y ( ) train + y( ) train  y( ) train  = 0 (5.10) ⇒ 2X( ) train  X( ) train w X − 2 ( ) train  y( ) train = 0 (5.11) ⇒ w =  X( ) train  X( ) train −1 X( ) train  y( ) train (5.12) The system of equations whose solution is given by equation is known as 5.12 the normal equations. Evaluating equation constitutes a simple learning 5.12 algorithm. For an example of the linear regression learning algorithm in action, see figure . 5.1 It is worth noting that the term linear regression is often used to refer to a slightly more sophisticated model with one additional parameter—an intercept term . In this model b ŷ = w x + b (5.13) so the mapping from parameters to predictions is still a linear function but the mapping from features to predictions is now an affine function. This extension to affine functions means that the plot of the model’s predictions still looks like a line, but it need not pass through the origin. Instead of adding the bias parameter 109
  • 126. CHAPTER 5. MACHINE LEARNING BASICS b, one can continue to use the model with only weights but augment x with an extra entry that is always set to . The weight corresponding to the extra entry 1 1 plays the role of the bias parameter. We will frequently use the term “linear” when referring to affine functions throughout this book. The intercept term b is often called the bias parameter of the affine transfor- mation. This terminology derives from the point of view that the output of the transformation is biased toward being b in the absence of any input. This term is different from the idea of a statistical bias, in which a statistical estimation algorithm’s expected estimate of a quantity is not equal to the true quantity. Linear regression is of course an extremely simple and limited learning algorithm, but it provides an example of how a learning algorithm can work. In the subsequent sections we will describe some of the basic principles underlying learning algorithm design and demonstrate how these principles can be used to build more complicated learning algorithms. 5.2 Capacity, Overfitting and Underfitting The central challenge in machine learning is that we must perform well on new, previously unseen inputs—not just those on which our model was trained. The ability to perform well on previously unobserved inputs is called generalization. Typically, when training a machine learning model, we have access to a training set, we can compute some error measure on the training set called the training error, and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is that we want the generalization error, also called the test error, to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice. We typically estimate the generalization error of a machine learning model by measuring its performance on a test set of examples that were collected separately from the training set. In our linear regression example, we trained the model by minimizing the training error, 1 m( ) train ||X( ) train w y − ( ) train ||2 2, (5.14) but we actually care about the test error, 1 m( ) test ||X( ) test w y − ( ) test ||2 2. How can we affect performance on the test set when we get to observe only the 110
  • 127. CHAPTER 5. MACHINE LEARNING BASICS training set? The field of statistical learning theory provides some answers. If the training and the test set are collected arbitrarily, there is indeed little we can do. If we are allowed to make some assumptions about how the training and test set are collected, then we can make some progress. The train and test data are generated by a probability distribution over datasets called the data generating process. We typically make a set of assumptions known collectively as the i.i.d. assumptions. These assumptions are that the examples in each dataset are independent from each other, and that the train set and test set are identically distributed, drawn from the same probability distribution as each other. This assumption allows us to describe the data gen- erating process with a probability distribution over a single example. The same distribution is then used to generate every train example and every test example. We call that shared underlying distribution the data generating distribution, denoted pdata. This probabilistic framework and the i.i.d. assumptions allow us to mathematically study the relationship between training error and test error. One immediate connection we can observe between the training and test error is that the expected training error of a randomly selected model is equal to the expected test error of that model. Suppose we have a probability distribution p(x, y) and we sample from it repeatedly to generate the train set and the test set. For some fixed value w, the expected training set error is exactly the same as the expected test set error, because both expectations are formed using the same dataset sampling process. The only difference between the two conditions is the name we assign to the dataset we sample. Of course, when we use a machine learning algorithm, we do not fix the parameters ahead of time, then sample both datasets. We sample the training set, then use it to choose the parameters to reduce training set error, then sample the test set. Under this process, the expected test error is greater than or equal to the expected value of training error. The factors determining how well a machine learning algorithm will perform are its ability to: 1. Make the training error small. 2. Make the gap between training and test error small. These two factors correspond to the two central challenges in machine learning: underfitting and overfitting. Underfitting occurs when the model is not able to obtain a sufficiently low error value on the training set. Overfitting occurs when the gap between the training error and test error is too large. We can control whether a model is more likely to overfit or underfit by altering its capacity. Informally, a model’s capacity is its ability to fit a wide variety of 111
  • 128. CHAPTER 5. MACHINE LEARNING BASICS functions. Models with low capacity may struggle to fit the training set. Models with high capacity can overfit by memorizing properties of the training set that do not serve them well on the test set. One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to select as being the solution. For example, the linear regression algorithm has the set of all linear functions of its input as its hypothesis space. We can generalize linear regression to include polynomials, rather than just linear functions, in its hypothesis space. Doing so increases the model’s capacity. A polynomial of degree one gives us the linear regression model with which we are already familiar, with prediction ŷ b wx. = + (5.15) By introducing x2 as another feature provided to the linear regression model, we can learn a model that is quadratic as a function of : x ŷ b w = + 1x w + 2x2 . (5.16) Though this model implements a quadratic function of its , the output is input still a linear function of the parameters, so we can still use the normal equations to train the model in closed form. We can continue to add more powers of x as additional features, for example to obtain a polynomial of degree 9: ŷ b = + 9  i=1 wixi . (5.17) Machine learning algorithms will generally perform best when their capacity is appropriate for the true complexity of the task they need to perform and the amount of training data they are provided with. Models with insufficient capacity are unable to solve complex tasks. Models with high capacity can solve complex tasks, but when their capacity is higher than needed to solve the present task they may overfit. Figure shows this principle in action. We compare a linear, quadratic 5.2 and degree-9 predictor attempting to fit a problem where the true underlying function is quadratic. The linear function is unable to capture the curvature in the true underlying problem, so it underfits. The degree-9 predictor is capable of representing the correct function, but it is also capable of representing infinitely many other functions that pass exactly through the training points, because we 112
  • 129. CHAPTER 5. MACHINE LEARNING BASICS have more parameters than training examples. We have little chance of choosing a solution that generalizes well when so many wildly different solutions exist. In this example, the quadratic model is perfectly matched to the true structure of the task so it generalizes well to new data.          Figure 5.2: We fit three models to this example training set. The training data was generated synthetically, by randomly sampling x values and choosing y deterministically by evaluating a quadratic function. (Left)A linear function fit to the data suffers from underfitting—it cannot capture the curvature that is present in the data. A (Center) quadratic function fit to the data generalizes well to unseen points. It does not suffer from a significant amount of overfitting or underfitting. A polynomial of degree 9 fit to (Right) the data suffers from overfitting. Here we used the Moore-Penrose pseudoinverse to solve the underdetermined normal equations. The solution passes through all of the training points exactly, but we have not been lucky enough for it to extract the correct structure. It now has a deep valley in between two training points that does not appear in the true underlying function. It also increases sharply on the left side of the data, while the true function decreases in this area. So far we have described only one way of changing a model’s capacity: by changing the number of input features it has, and simultaneously adding new parameters associated with those features. There are in fact many ways of changing a model’s capacity. Capacity is not determined only by the choice of model. The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model. In many cases, finding the best function within this family is a very difficult optimization problem. In practice, the learning algorithm does not actually find the best function, but merely one that significantly reduces the training error. These additional limitations, such as 113
  • 130. CHAPTER 5. MACHINE LEARNING BASICS the imperfection of the optimization algorithm, mean that the learning algorithm’s effective capacity may be less than the representational capacity of the model family. Our modern ideas about improving the generalization of machine learning models are refinements of thought dating back to philosophers at least as early as Ptolemy. Many early scholars invoke a principle of parsimony that is now most widely known as Occam’s razor (c. 1287-1347). This principle states that among competing hypotheses that explain known observations equally well, one should choose the “simplest” one. This idea was formalized and made more precise in the 20th century by the founders of statistical learning theory (Vapnik and Chervonenkis 1971 Vapnik 1982 Blumer 1989 Vapnik 1995 , ; , ; et al., ; , ). Statistical learning theory provides various means of quantifying model capacity. Among these, the most well-known is the Vapnik-Chervonenkis dimension, or VC dimension. The VC dimension measures the capacity of a binary classifier. The VC dimension is defined as being the largest possible value of m for which there exists a training set of m different x points that the classifier can label arbitrarily. Quantifying the capacity of the model allows statistical learning theory to make quantitative predictions. The most important results in statistical learning theory show that the discrepancy between training error and generalization error is bounded from above by a quantity that grows as the model capacity grows but shrinks as the number of training examples increases (Vapnik and Chervonenkis, 1971 Vapnik 1982 Blumer 1989 Vapnik 1995 ; , ; et al., ; , ). These bounds provide intellectual justification that machine learning algorithms can work, but they are rarely used in practice when working with deep learning algorithms. This is in part because the bounds are often quite loose and in part because it can be quite difficult to determine the capacity of deep learning algorithms. The problem of determining the capacity of a deep learning model is especially difficult because the effective capacity is limited by the capabilities of the optimization algorithm, and we have little theoretical understanding of the very general non-convex optimization problems involved in deep learning. We must remember that while simpler functions are more likely to generalize (to have a small gap between training and test error) we must still choose a sufficiently complex hypothesis to achieve low training error. Typically, training error decreases until it asymptotes to the minimum possible error value as model capacity increases (assuming the error measure has a minimum value). Typically, generalization error has a U-shaped curve as a function of model capacity. This is illustrated in figure . 5.3 To reach the most extreme case of arbitrarily high capacity, we introduce 114
the concept of non-parametric models. So far, we have seen only parametric models, such as linear regression. Parametric models learn a function described by a parameter vector whose size is finite and fixed before any data is observed. Non-parametric models have no such limitation.

Figure 5.3: Typical relationship between capacity and error. Training and test error behave differently. At the left end of the graph, training error and generalization error are both high. This is the underfitting regime. As we increase capacity, training error decreases, but the gap between training and generalization error increases. Eventually, the size of this gap outweighs the decrease in training error, and we enter the overfitting regime, where capacity is too large, above the optimal capacity.

Sometimes, non-parametric models are just theoretical abstractions (such as an algorithm that searches over all possible probability distributions) that cannot be implemented in practice. However, we can also design practical non-parametric models by making their complexity a function of the training set size. One example of such an algorithm is nearest neighbor regression. Unlike linear regression, which has a fixed-length vector of weights, the nearest neighbor regression model simply stores the X and y from the training set. When asked to classify a test point x, the model looks up the nearest entry in the training set and returns the associated regression target. In other words, ŷ = y_i where i = arg min_i ||X_{i,:} − x||₂². The algorithm can also be generalized to distance metrics other than the L² norm, such as learned distance metrics (Goldberger et al., 2005). If the algorithm is allowed to break ties by averaging the y_i values for all X_{i,:} that are tied for nearest, then this algorithm is able to achieve the minimum possible training error (which might be greater than zero, if two identical inputs are associated with different outputs) on any regression dataset.
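As a concrete illustration, here is a minimal nearest neighbor regressor in NumPy (the class name and the toy data are hypothetical, not from the book); it stores the training set verbatim and, at query time, returns the target of the closest stored point, averaging over ties as described above.

```python
# Minimal sketch (assumed example) of nearest neighbor regression:
# the "model" is just the stored training data; prediction is a lookup.
import numpy as np

class NearestNeighborRegressor:
    def fit(self, X, y):
        # Non-parametric: complexity grows with the training set, since we
        # memorize X and y rather than learning a fixed-size weight vector.
        self.X, self.y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
        return self

    def predict(self, X_query):
        X_query = np.atleast_2d(np.asarray(X_query, dtype=float))
        preds = []
        for x in X_query:
            d2 = np.sum((self.X - x) ** 2, axis=1)      # squared L2 distances
            nearest = np.flatnonzero(d2 == d2.min())    # all points tied for nearest
            preds.append(self.y[nearest].mean())        # break ties by averaging
        return np.array(preds)

X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 1.0, 4.0, 9.0])
model = NearestNeighborRegressor().fit(X_train, y_train)
print(model.predict([[0.9], [2.6]]))   # -> [1., 9.]
```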
Finally, we can also create a non-parametric learning algorithm by wrapping a parametric learning algorithm inside another algorithm that increases the number of parameters as needed. For example, we could imagine an outer loop of learning that changes the degree of the polynomial learned by linear regression on top of a polynomial expansion of the input.

The ideal model is an oracle that simply knows the true probability distribution that generates the data. Even such a model will still incur some error on many problems, because there may still be some noise in the distribution. In the case of supervised learning, the mapping from x to y may be inherently stochastic, or y may be a deterministic function that involves other variables besides those included in x. The error incurred by an oracle making predictions from the true distribution p(x, y) is called the Bayes error.

Training and generalization error vary as the size of the training set varies. Expected generalization error can never increase as the number of training examples increases. For non-parametric models, more data yields better generalization until the best possible error is achieved. Any fixed parametric model with less than optimal capacity will asymptote to an error value that exceeds the Bayes error. See figure 5.4 for an illustration. Note that it is possible for the model to have optimal capacity and yet still have a large gap between training and generalization error. In this situation, we may be able to reduce this gap by gathering more training examples.

5.2.1 The No Free Lunch Theorem

Learning theory claims that a machine learning algorithm can generalize well from a finite training set of examples. This seems to contradict some basic principles of logic. Inductive reasoning, or inferring general rules from a limited set of examples, is not logically valid. To logically infer a rule describing every member of a set, one must have information about every member of that set.

In part, machine learning avoids this problem by offering only probabilistic rules, rather than the entirely certain rules used in purely logical reasoning. Machine learning promises to find rules that are probably correct about most members of the set they concern.

Unfortunately, even this does not resolve the entire problem. The no free lunch theorem for machine learning (Wolpert, 1996) states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average
Figure 5.4: The effect of the training dataset size on the train and test error, as well as on the optimal model capacity. We constructed a synthetic regression problem based on adding a moderate amount of noise to a degree-5 polynomial, generated a single test set, and then generated several different sizes of training set. For each size, we generated 40 different training sets in order to plot error bars showing 95 percent confidence intervals. (Top) The MSE on the training and test set for two different models: a quadratic model, and a model with degree chosen to minimize the test error. Both are fit in closed form. For the quadratic model, the training error increases as the size of the training set increases. This is because larger datasets are harder to fit. Simultaneously, the test error decreases, because fewer incorrect hypotheses are consistent with the training data. The quadratic model does not have enough capacity to solve the task, so its test error asymptotes to a high value. The test error at optimal capacity asymptotes to the Bayes error. The training error can fall below the Bayes error, due to the ability of the training algorithm to memorize specific instances of the training set. As the training set size increases to infinity, the training error of any fixed-capacity model (here, the quadratic model) must rise to at least the Bayes error. (Bottom) As the training set size increases, the optimal capacity (shown here as the degree of the optimal polynomial regressor) increases. The optimal capacity plateaus after reaching sufficient complexity to solve the task.
  • 134. CHAPTER 5. MACHINE LEARNING BASICS performance (over all possible tasks) as merely predicting that every point belongs to the same class. Fortunately, these results hold only when we average over possible data all generating distributions. If we make assumptions about the kinds of probability distributions we encounter in real-world applications, then we can design learning algorithms that perform well on these distributions. This means that the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the “real world” that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about. 5.2.2 Regularization The no free lunch theorem implies that we must design our machine learning algorithms to perform well on a specific task. We do so by building a set of preferences into the learning algorithm. When these preferences are aligned with the learning problems we ask the algorithm to solve, it performs better. So far, the only method of modifying a learning algorithm that we have discussed concretely is to increase or decrease the model’s representational capacity by adding or removing functions from the hypothesis space of solutions the learning algorithm is able to choose. We gave the specific example of increasing or decreasing the degree of a polynomial for a regression problem. The view we have described so far is oversimplified. The behavior of our algorithm is strongly affected not just by how large we make the set of functions allowed in its hypothesis space, but by the specific identity of those functions. The learning algorithm we have studied so far, linear regression, has a hypothesis space consisting of the set of linear functions of its input. These linear functions can be very useful for problems where the relationship between inputs and outputs truly is close to linear. They are less useful for problems that behave in a very nonlinear fashion. For example, linear regression would not perform very well if we tried to use it to predict sin(x) from x. We can thus control the performance of our algorithms by choosing what kind of functions we allow them to draw solutions from, as well as by controlling the amount of these functions. We can also give a learning algorithm a preference for one solution in its hypothesis space to another. This means that both functions are eligible, but one is preferred. The unpreferred solution will be chosen only if it fits the training 118
data significantly better than the preferred solution.

For example, we can modify the training criterion for linear regression to include weight decay. To perform linear regression with weight decay, we minimize a sum comprising both the mean squared error on the training set and a criterion J(w) that expresses a preference for the weights to have smaller squared L² norm. Specifically,

J(w) = MSE_train + λ wᵀw,  (5.18)

where λ is a value chosen ahead of time that controls the strength of our preference for smaller weights. When λ = 0, we impose no preference, and larger λ forces the weights to become smaller. Minimizing J(w) results in a choice of weights that make a tradeoff between fitting the training data and being small. This gives us solutions that have a smaller slope, or that put weight on fewer of the features. As an example of how we can control a model's tendency to overfit or underfit via weight decay, we can train a high-degree polynomial regression model with different values of λ. See figure 5.5 for the results.

Figure 5.5: We fit a high-degree polynomial regression model to our example training set from figure 5.2. The true function is quadratic, but here we use only models with degree 9. We vary the amount of weight decay to prevent these high-degree models from overfitting. (Left) With very large λ, we can force the model to learn a function with no slope at all. This underfits because it can only represent a constant function. (Center) With a medium value of λ, the learning algorithm recovers a curve with the right general shape. Even though the model is capable of representing functions with much more complicated shape, weight decay has encouraged it to use a simpler function described by smaller coefficients. (Right) With weight decay approaching zero (i.e., using the Moore-Penrose pseudoinverse to solve the underdetermined problem with minimal regularization), the degree-9 polynomial overfits significantly, as we saw in figure 5.2.
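A quick sketch of weight decay in code (an assumed example, not from the book): for linear regression with the penalty λ wᵀw added to the averaged squared error, the regularized criterion still has a closed-form minimizer, w = (XᵀX + λ m I)⁻¹ Xᵀy, and sweeping λ moves the solution from an overfit degree-9 polynomial toward a heavily shrunk, nearly constant one.

```python
# Minimal sketch (assumed example): linear regression with weight decay.
# Minimizing J(w) = (1/m) * ||X w - y||^2 + lambda * w^T w in closed form.
import numpy as np

def weight_decay_fit(X, y, lam):
    m, n = X.shape
    # Setting the gradient of J(w) to zero gives (X^T X + lam * m * I) w = X^T y.
    return np.linalg.solve(X.T @ X + lam * m * np.eye(n), X.T @ y)

# Degree-9 polynomial features for a tiny, roughly quadratic dataset (as in figure 5.5).
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=12)
y = x**2 + 0.05 * rng.normal(size=12)
X = np.vander(x, 10, increasing=True)          # columns 1, x, ..., x^9

for lam in (1e-6, 1e-2, 1.0):
    w = weight_decay_fit(X, y, lam)
    print(f"lambda={lam:g}  ||w||^2={w @ w:.3f}  "
          f"train MSE={np.mean((X @ w - y)**2):.4f}")
```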
  • 136. CHAPTER 5. MACHINE LEARNING BASICS More generally, we can regularize a model that learns a function f(x; θ) by adding a penalty called a regularizer to the cost function. In the case of weight decay, the regularizer is Ω(w) =w w. In chapter , we will see that many other 7 regularizers are possible. Expressing preferences for one function over another is a more general way of controlling a model’s capacity than including or excluding members from the hypothesis space. We can think of excluding a function from a hypothesis space as expressing an infinitely strong preference against that function. In our weight decay example, we expressed our preference for linear functions defined with smaller weights explicitly, via an extra term in the criterion we minimize. There are many other ways of expressing preferences for different solutions, both implicitly and explicitly. Together, these different approaches are known as regularization. Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error. Regularization is one of the central concerns of the field of machine learning, rivaled in its importance only by optimization. The no free lunch theorem has made it clear that there is no best machine learning algorithm, and, in particular, no best form of regularization. Instead we must choose a form of regularization that is well-suited to the particular task we want to solve. The philosophy of deep learning in general and this book in particular is that a very wide range of tasks (such as all of the intellectual tasks that people can do) may all be solved effectively using very general-purpose forms of regularization. 5.3 Hyperparameters and Validation Sets Most machine learning algorithms have several settings that we can use to control the behavior of the learning algorithm. These settings are called hyperparame- ters. The values of hyperparameters are not adapted by the learning algorithm itself (though we can design a nested learning procedure where one learning algorithm learns the best hyperparameters for another learning algorithm). In the polynomial regression example we saw in figure , there is a single 5.2 hyperparameter: the degree of the polynomial, which acts as a capacity hyper- parameter. The λ value used to control the strength of weight decay is another example of a hyperparameter. Sometimes a setting is chosen to be a hyperparameter that the learning al- gorithm does not learn because it is difficult to optimize. More frequently, the 120
setting must be a hyperparameter because it is not appropriate to learn that hyperparameter on the training set. This applies to all hyperparameters that control model capacity. If learned on the training set, such hyperparameters would always choose the maximum possible model capacity, resulting in overfitting (refer to figure 5.3). For example, we can always fit the training set better with a higher degree polynomial and a weight decay setting of λ = 0 than we could with a lower degree polynomial and a positive weight decay setting.

To solve this problem, we need a validation set of examples that the training algorithm does not observe.

Earlier we discussed how a held-out test set, composed of examples coming from the same distribution as the training set, can be used to estimate the generalization error of a learner, after the learning process has completed. It is important that the test examples are not used in any way to make choices about the model, including its hyperparameters. For this reason, no example from the test set can be used in the validation set. Therefore, we always construct the validation set from the training data. Specifically, we split the training data into two disjoint subsets. One of these subsets is used to learn the parameters. The other subset is our validation set, used to estimate the generalization error during or after training, allowing for the hyperparameters to be updated accordingly. The subset of data used to learn the parameters is still typically called the training set, even though this may be confused with the larger pool of data used for the entire training process. The subset of data used to guide the selection of hyperparameters is called the validation set. Typically, one uses about 80% of the training data for training and 20% for validation. Since the validation set is used to "train" the hyperparameters, the validation set error will underestimate the generalization error, though typically by a smaller amount than the training error. After all hyperparameter optimization is complete, the generalization error may be estimated using the test set.

In practice, when the same test set has been used repeatedly to evaluate performance of different algorithms over many years, and especially if we consider all the attempts from the scientific community at beating the reported state-of-the-art performance on that test set, we end up having optimistic evaluations with the test set as well. Benchmarks can thus become stale and then do not reflect the true field performance of a trained system. Thankfully, the community tends to move on to new (and usually more ambitious and larger) benchmark datasets.
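To make the 80/20 recipe concrete, here is a small assumed sketch (the data and variable names are hypothetical) that carves a validation split out of the available training data, fits a weight-decay model for several values of λ on the 80% portion, and picks the λ with the lowest validation error; only after this choice would the held-out test set be touched.

```python
# Minimal sketch (assumed example): choosing the weight decay strength lambda
# on a validation split carved out of the training data.
import numpy as np

def fit_ridge(X, y, lam):
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

rng = np.random.default_rng(0)
X_all = rng.normal(size=(200, 10))
y_all = X_all @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

# 80% of the training data learns the parameters, 20% selects hyperparameters.
perm = rng.permutation(len(X_all))
train_idx, valid_idx = perm[:160], perm[160:]
X_tr, y_tr = X_all[train_idx], y_all[train_idx]
X_va, y_va = X_all[valid_idx], y_all[valid_idx]

best = min(
    (mse(X_va, y_va, fit_ridge(X_tr, y_tr, lam)), lam)
    for lam in (1e-4, 1e-2, 1.0, 10.0)
)
print("chosen lambda:", best[1], "validation MSE:", round(best[0], 4))
```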
5.3.1 Cross-Validation

Dividing the dataset into a fixed training set and a fixed test set can be problematic if it results in the test set being small. A small test set implies statistical uncertainty around the estimated average test error, making it difficult to claim that algorithm A works better than algorithm B on the given task.

When the dataset has hundreds of thousands of examples or more, this is not a serious issue. When the dataset is too small, alternative procedures enable one to use all of the examples in the estimation of the mean test error, at the price of increased computational cost. These procedures are based on the idea of repeating the training and testing computation on different randomly chosen subsets or splits of the original dataset. The most common of these is the k-fold cross-validation procedure, shown in algorithm 5.1, in which a partition of the dataset is formed by splitting it into k non-overlapping subsets. The test error may then be estimated by taking the average test error across k trials. On trial i, the i-th subset of the data is used as the test set and the rest of the data is used as the training set. One problem is that there exist no unbiased estimators of the variance of such average error estimators (Bengio and Grandvalet, 2004), but approximations are typically used.

5.4 Estimators, Bias and Variance

The field of statistics gives us many tools that can be used to achieve the machine learning goal of solving a task not only on the training set but also to generalize. Foundational concepts such as parameter estimation, bias and variance are useful to formally characterize notions of generalization, underfitting and overfitting.

5.4.1 Point Estimation

Point estimation is the attempt to provide the single "best" prediction of some quantity of interest. In general the quantity of interest can be a single parameter or a vector of parameters in some parametric model, such as the weights in our linear regression example in section 5.1.4, but it can also be a whole function. In order to distinguish estimates of parameters from their true value, our convention will be to denote a point estimate of a parameter θ by θ̂.
Algorithm 5.1 The k-fold cross-validation algorithm. It can be used to estimate generalization error of a learning algorithm A when the given dataset D is too small for a simple train/test or train/valid split to yield accurate estimation of generalization error, because the mean of a loss L on a small test set may have too high variance. The dataset D contains as elements the abstract examples z^(i) (for the i-th example), which could stand for an (input, target) pair z^(i) = (x^(i), y^(i)) in the case of supervised learning, or for just an input z^(i) = x^(i) in the case of unsupervised learning. The algorithm returns the vector of errors e for each example in D, whose mean is the estimated generalization error. The errors on individual examples can be used to compute a confidence interval around the mean (equation 5.47). While these confidence intervals are not well-justified after the use of cross-validation, it is still common practice to use them to declare that algorithm A is better than algorithm B only if the confidence interval of the error of algorithm A lies below and does not intersect the confidence interval of algorithm B.

Define KFoldXV(D, A, L, k):
Require: D, the given dataset, with elements z^(i)
Require: A, the learning algorithm, seen as a function that takes a dataset as input and outputs a learned function
Require: L, the loss function, seen as a function from a learned function f and an example z^(i) ∈ D to a scalar ∈ R
Require: k, the number of folds
  Split D into k mutually exclusive subsets D_i, whose union is D.
  for i from 1 to k do
    f_i = A(D \ D_i)
    for z^(j) in D_i do
      e_j = L(f_i, z^(j))
    end for
  end for
  Return e
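A compact NumPy rendering of Algorithm 5.1 (an assumed sketch; the learner and loss passed in are placeholders supplied by the caller) returns the per-example errors e so that their mean and a confidence interval can be computed as described above.

```python
# Minimal sketch of Algorithm 5.1 (k-fold cross-validation).
# `learn` plays the role of A (dataset -> fitted function) and
# `loss` plays the role of L (fitted function, example -> scalar).
import numpy as np

def k_fold_xv(data, learn, loss, k, seed=0):
    n = len(data)
    errors = np.empty(n)
    indices = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(indices, k)           # k mutually exclusive subsets
    for test_idx in folds:
        train_idx = np.setdiff1d(indices, test_idx)
        f = learn([data[i] for i in train_idx])  # f_i = A(D \ D_i)
        for j in test_idx:
            errors[j] = loss(f, data[j])         # e_j = L(f_i, z^(j))
    return errors                                # mean(errors) estimates generalization error

# Example usage with a trivial "predict the training-target mean" learner.
data = [(float(x), 2.0 * x + 1.0) for x in range(20)]          # (input, target) pairs
learn = lambda d: (lambda x, c=np.mean([t for _, t in d]): c)  # constant predictor
loss = lambda f, z: (f(z[0]) - z[1]) ** 2                      # squared error
e = k_fold_xv(data, learn, loss, k=5)
print("estimated generalization error:", e.mean())
```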
Let {x^(1), . . . , x^(m)} be a set of m independent and identically distributed (i.i.d.) data points. A point estimator or statistic is any function of the data:

θ̂_m = g(x^(1), . . . , x^(m)).  (5.19)

The definition does not require that g return a value that is close to the true θ or even that the range of g is the same as the set of allowable values of θ. This definition of a point estimator is very general and allows the designer of an estimator great flexibility. While almost any function thus qualifies as an estimator, a good estimator is a function whose output is close to the true underlying θ that generated the training data.

For now, we take the frequentist perspective on statistics. That is, we assume that the true parameter value θ is fixed but unknown, while the point estimate θ̂ is a function of the data. Since the data is drawn from a random process, any function of the data is random. Therefore θ̂ is a random variable.

Point estimation can also refer to the estimation of the relationship between input and target variables. We refer to these types of point estimates as function estimators.

Function Estimation  As we mentioned above, sometimes we are interested in performing function estimation (or function approximation). Here we are trying to predict a variable y given an input vector x. We assume that there is a function f(x) that describes the approximate relationship between y and x. For example, we may assume that y = f(x) + ε, where ε stands for the part of y that is not predictable from x. In function estimation, we are interested in approximating f with a model or estimate f̂. Function estimation is really just the same as estimating a parameter θ; the function estimator f̂ is simply a point estimator in function space. The linear regression example (discussed above in section 5.1.4) and the polynomial regression example (discussed in section 5.2) are both examples of scenarios that may be interpreted either as estimating a parameter w or estimating a function f̂ mapping from x to y.

We now review the most commonly studied properties of point estimators and discuss what they tell us about these estimators.

5.4.2 Bias

The bias of an estimator is defined as:

bias(θ̂_m) = E(θ̂_m) − θ,  (5.20)
where the expectation is over the data (seen as samples from a random variable) and θ is the true underlying value of θ used to define the data generating distribution. An estimator θ̂_m is said to be unbiased if bias(θ̂_m) = 0, which implies that E(θ̂_m) = θ. An estimator θ̂_m is said to be asymptotically unbiased if lim_{m→∞} bias(θ̂_m) = 0, which implies that lim_{m→∞} E(θ̂_m) = θ.

Example: Bernoulli Distribution  Consider a set of samples {x^(1), . . . , x^(m)} that are independently and identically distributed according to a Bernoulli distribution with mean θ:

P(x^(i); θ) = θ^{x^(i)} (1 − θ)^{(1 − x^(i))}.  (5.21)

A common estimator for the θ parameter of this distribution is the mean of the training samples:

θ̂_m = (1/m) Σ_{i=1}^m x^(i).  (5.22)

To determine whether this estimator is biased, we can substitute equation 5.22 into equation 5.20:

bias(θ̂_m) = E[θ̂_m] − θ  (5.23)
= E[ (1/m) Σ_{i=1}^m x^(i) ] − θ  (5.24)
= (1/m) Σ_{i=1}^m E[x^(i)] − θ  (5.25)
= (1/m) Σ_{i=1}^m Σ_{x^(i)=0}^{1} x^(i) θ^{x^(i)} (1 − θ)^{(1 − x^(i))} − θ  (5.26)
= (1/m) Σ_{i=1}^m θ − θ  (5.27)
= θ − θ = 0.  (5.28)

Since bias(θ̂) = 0, we say that our estimator θ̂ is unbiased.

Example: Gaussian Distribution Estimator of the Mean  Now, consider a set of samples {x^(1), . . . , x^(m)} that are independently and identically distributed according to a Gaussian distribution p(x^(i)) = N(x^(i); µ, σ²), where i ∈ {1, . . . , m}.
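As an illustrative aside (not from the book), a quick Monte Carlo check makes the result in equation 5.28 tangible: averaging the estimator θ̂_m over many simulated datasets recovers the true θ up to sampling noise.

```python
# Minimal sketch: empirically checking that the sample mean is an unbiased
# estimator of the Bernoulli parameter theta (equations 5.22-5.28).
import numpy as np

rng = np.random.default_rng(0)
theta, m, n_datasets = 0.3, 25, 100_000

# Each row is one dataset of m Bernoulli(theta) samples; each row's mean is one theta_hat.
samples = rng.binomial(1, theta, size=(n_datasets, m))
theta_hat = samples.mean(axis=1)

print("E[theta_hat] ~", theta_hat.mean())            # close to 0.3
print("empirical bias ~", theta_hat.mean() - theta)  # close to 0
```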
Recall that the Gaussian probability density function is given by

p(x^(i); µ, σ²) = (1 / √(2πσ²)) exp( −(1/2) (x^(i) − µ)² / σ² ).  (5.29)

A common estimator of the Gaussian mean parameter is known as the sample mean:

µ̂_m = (1/m) Σ_{i=1}^m x^(i).  (5.30)

To determine the bias of the sample mean, we are again interested in calculating its expectation:

bias(µ̂_m) = E[µ̂_m] − µ  (5.31)
= E[ (1/m) Σ_{i=1}^m x^(i) ] − µ  (5.32)
= ( (1/m) Σ_{i=1}^m E[x^(i)] ) − µ  (5.33)
= ( (1/m) Σ_{i=1}^m µ ) − µ  (5.34)
= µ − µ = 0.  (5.35)

Thus we find that the sample mean is an unbiased estimator of the Gaussian mean parameter.

Example: Estimators of the Variance of a Gaussian Distribution  As an example, we compare two different estimators of the variance parameter σ² of a Gaussian distribution. We are interested in knowing if either estimator is biased.

The first estimator of σ² we consider is known as the sample variance:

σ̂²_m = (1/m) Σ_{i=1}^m (x^(i) − µ̂_m)²,  (5.36)

where µ̂_m is the sample mean, defined above. More formally, we are interested in computing

bias(σ̂²_m) = E[σ̂²_m] − σ².  (5.37)
We begin by evaluating the term E[σ̂²_m]:

E[σ̂²_m] = E[ (1/m) Σ_{i=1}^m (x^(i) − µ̂_m)² ]  (5.38)
= ((m − 1)/m) σ².  (5.39)

Returning to equation 5.37, we conclude that the bias of σ̂²_m is −σ²/m. Therefore, the sample variance is a biased estimator.

The unbiased sample variance estimator

σ̃²_m = (1/(m − 1)) Σ_{i=1}^m (x^(i) − µ̂_m)²  (5.40)

provides an alternative approach. As the name suggests this estimator is unbiased. That is, we find that E[σ̃²_m] = σ²:

E[σ̃²_m] = E[ (1/(m − 1)) Σ_{i=1}^m (x^(i) − µ̂_m)² ]  (5.41)
= (m/(m − 1)) E[σ̂²_m]  (5.42)
= (m/(m − 1)) ((m − 1)/m) σ²  (5.43)
= σ².  (5.44)

We have two estimators: one is biased and the other is not. While unbiased estimators are clearly desirable, they are not always the "best" estimators. As we will see we often use biased estimators that possess other important properties.

5.4.3 Variance and Standard Error

Another property of the estimator that we might want to consider is how much we expect it to vary as a function of the data sample. Just as we computed the expectation of the estimator to determine its bias, we can compute its variance. The variance of an estimator is simply the variance

Var(θ̂),  (5.45)

where the random variable is the training set. Alternately, the square root of the variance is called the standard error, denoted SE(θ̂).
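The bias computed in equations 5.38-5.44 is easy to verify numerically; the assumed sketch below draws many small Gaussian datasets and compares the average of the two variance estimators against the true σ².

```python
# Minimal sketch: the 1/m sample variance is biased low by sigma^2/m,
# while the 1/(m-1) estimator is unbiased (equations 5.36-5.44).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, m, n_datasets = 0.0, 4.0, 10, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(n_datasets, m))
biased = x.var(axis=1, ddof=0)      # divides by m
unbiased = x.var(axis=1, ddof=1)    # divides by m - 1

print("true sigma^2:", sigma2)
print("E[biased estimator]   ~", biased.mean())    # ~ (m-1)/m * sigma^2 = 3.6
print("E[unbiased estimator] ~", unbiased.mean())  # ~ 4.0
```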
The variance or the standard error of an estimator provides a measure of how we would expect the estimate we compute from data to vary as we independently resample the dataset from the underlying data generating process. Just as we might like an estimator to exhibit low bias we would also like it to have relatively low variance.

When we compute any statistic using a finite number of samples, our estimate of the true underlying parameter is uncertain, in the sense that we could have obtained other samples from the same distribution and their statistics would have been different. The expected degree of variation in any estimator is a source of error that we want to quantify.

The standard error of the mean is given by

SE(µ̂_m) = √( Var[ (1/m) Σ_{i=1}^m x^(i) ] ) = σ / √m,  (5.46)

where σ² is the true variance of the samples x^(i). The standard error is often estimated by using an estimate of σ. Unfortunately, neither the square root of the sample variance nor the square root of the unbiased estimator of the variance provide an unbiased estimate of the standard deviation. Both approaches tend to underestimate the true standard deviation, but are still used in practice. The square root of the unbiased estimator of the variance is less of an underestimate. For large m, the approximation is quite reasonable.

The standard error of the mean is very useful in machine learning experiments. We often estimate the generalization error by computing the sample mean of the error on the test set. The number of examples in the test set determines the accuracy of this estimate. Taking advantage of the central limit theorem, which tells us that the mean will be approximately distributed with a normal distribution, we can use the standard error to compute the probability that the true expectation falls in any chosen interval. For example, the 95% confidence interval centered on the mean µ̂_m is

(µ̂_m − 1.96 SE(µ̂_m), µ̂_m + 1.96 SE(µ̂_m)),  (5.47)

under the normal distribution with mean µ̂_m and variance SE(µ̂_m)². In machine learning experiments, it is common to say that algorithm A is better than algorithm B if the upper bound of the 95% confidence interval for the error of algorithm A is less than the lower bound of the 95% confidence interval for the error of algorithm B.
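A short assumed sketch of this recipe: given per-example test losses for two algorithms (random placeholders here), it computes each mean, its standard error, and the 95% interval of equation 5.47, then applies the non-overlap comparison rule.

```python
# Minimal sketch: 95% confidence intervals on test error via the standard
# error of the mean (equations 5.46-5.47), and the usual comparison rule.
import numpy as np

def mean_and_ci(errors):
    m = len(errors)
    mean = errors.mean()
    se = errors.std(ddof=1) / np.sqrt(m)      # estimated SE of the mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

rng = np.random.default_rng(0)
errors_a = rng.normal(0.18, 0.05, size=1000)  # placeholder per-example losses, algorithm A
errors_b = rng.normal(0.21, 0.05, size=1000)  # placeholder per-example losses, algorithm B

mean_a, ci_a = mean_and_ci(errors_a)
mean_b, ci_b = mean_and_ci(errors_b)
print(f"A: {mean_a:.3f} CI {ci_a}")
print(f"B: {mean_b:.3f} CI {ci_b}")
print("A better than B:", ci_a[1] < ci_b[0])  # upper bound of A below lower bound of B
```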
Example: Bernoulli Distribution  We once again consider a set of samples {x^(1), . . . , x^(m)} drawn independently and identically from a Bernoulli distribution (recall P(x^(i); θ) = θ^{x^(i)} (1 − θ)^{(1 − x^(i))}). This time we are interested in computing the variance of the estimator θ̂_m = (1/m) Σ_{i=1}^m x^(i).

Var(θ̂_m) = Var( (1/m) Σ_{i=1}^m x^(i) )  (5.48)
= (1/m²) Σ_{i=1}^m Var(x^(i))  (5.49)
= (1/m²) Σ_{i=1}^m θ(1 − θ)  (5.50)
= (1/m²) m θ(1 − θ)  (5.51)
= (1/m) θ(1 − θ).  (5.52)

The variance of the estimator decreases as a function of m, the number of examples in the dataset. This is a common property of popular estimators that we will return to when we discuss consistency (see section 5.4.5).

5.4.4 Trading off Bias and Variance to Minimize Mean Squared Error

Bias and variance measure two different sources of error in an estimator. Bias measures the expected deviation from the true value of the function or parameter. Variance, on the other hand, provides a measure of the deviation from the expected estimator value that any particular sampling of the data is likely to cause.

What happens when we are given a choice between two estimators, one with more bias and one with more variance? How do we choose between them? For example, imagine that we are interested in approximating the function shown in figure 5.2 and we are only offered the choice between a model with large bias and one that suffers from large variance. How do we choose between them?

The most common way to negotiate this trade-off is to use cross-validation. Empirically, cross-validation is highly successful on many real-world tasks. Alternatively, we can also compare the mean squared error (MSE) of the estimates:

MSE = E[(θ̂_m − θ)²]  (5.53)
= Bias(θ̂_m)² + Var(θ̂_m).  (5.54)
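The decomposition in equation 5.54 can be checked empirically; this assumed sketch compares the biased and unbiased Gaussian variance estimators from section 5.4.2 and confirms that, for each, squared bias plus variance matches the directly estimated MSE (the biased estimator actually achieves the lower MSE here, illustrating why unbiasedness alone is not the goal).

```python
# Minimal sketch: verifying MSE = Bias^2 + Var (equation 5.54) for the two
# Gaussian variance estimators of section 5.4.2.
import numpy as np

rng = np.random.default_rng(0)
sigma2, m, n_datasets = 4.0, 10, 200_000
x = rng.normal(0.0, np.sqrt(sigma2), size=(n_datasets, m))

for name, ddof in (("biased (1/m)", 0), ("unbiased (1/(m-1))", 1)):
    est = x.var(axis=1, ddof=ddof)
    bias = est.mean() - sigma2
    var = est.var()
    mse = np.mean((est - sigma2) ** 2)
    print(f"{name:20s} bias^2 + var = {bias**2 + var:.4f}   MSE = {mse:.4f}")
```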
The MSE measures the overall expected deviation, in a squared error sense, between the estimator and the true value of the parameter θ. As is clear from equation 5.54, evaluating the MSE incorporates both the bias and the variance. Desirable estimators are those with small MSE, and these are estimators that manage to keep both their bias and variance somewhat in check.

Figure 5.6: As capacity increases (x-axis), bias (dotted) tends to decrease and variance (dashed) tends to increase, yielding another U-shaped curve for generalization error (bold curve). If we vary capacity along one axis, there is an optimal capacity, with underfitting when the capacity is below this optimum and overfitting when it is above. This relationship is similar to the relationship between capacity, underfitting, and overfitting, discussed in section 5.2 and figure 5.3.

The relationship between bias and variance is tightly linked to the machine learning concepts of capacity, underfitting and overfitting. In the case where generalization error is measured by the MSE (where bias and variance are meaningful components of generalization error), increasing capacity tends to increase variance and decrease bias. This is illustrated in figure 5.6, where we see again the U-shaped curve of generalization error as a function of capacity.

5.4.5 Consistency

So far we have discussed the properties of various estimators for a training set of fixed size. Usually, we are also concerned with the behavior of an estimator as the amount of training data grows. In particular, we usually wish that, as the number of data points m in our dataset increases, our point estimates converge to the true
value of the corresponding parameters. More formally, we would like that

plim_{m→∞} θ̂_m = θ.  (5.55)

The symbol plim indicates convergence in probability, meaning that for any ε > 0, P(|θ̂_m − θ| > ε) → 0 as m → ∞. The condition described by equation 5.55 is known as consistency. It is sometimes referred to as weak consistency, with strong consistency referring to the almost sure convergence of θ̂ to θ. Almost sure convergence of a sequence of random variables x^(1), x^(2), . . . to a value x occurs when p(lim_{m→∞} x^(m) = x) = 1.

Consistency ensures that the bias induced by the estimator diminishes as the number of data examples grows. However, the reverse is not true: asymptotic unbiasedness does not imply consistency. For example, consider estimating the mean parameter µ of a normal distribution N(x; µ, σ²), with a dataset consisting of m samples: {x^(1), . . . , x^(m)}. We could use the first sample x^(1) of the dataset as an unbiased estimator: θ̂ = x^(1). In that case, E(θ̂_m) = θ, so the estimator is unbiased no matter how many data points are seen. This, of course, implies that the estimate is asymptotically unbiased. However, this is not a consistent estimator, as it is not the case that θ̂_m → θ as m → ∞.

5.5 Maximum Likelihood Estimation

Previously, we have seen some definitions of common estimators and analyzed their properties. But where did these estimators come from? Rather than guessing that some function might make a good estimator and then analyzing its bias and variance, we would like to have some principle from which we can derive specific functions that are good estimators for different models.

The most common such principle is the maximum likelihood principle.

Consider a set of m examples X = {x^(1), . . . , x^(m)} drawn independently from the true but unknown data generating distribution p_data(x). Let p_model(x; θ) be a parametric family of probability distributions over the same space indexed by θ. In other words, p_model(x; θ) maps any configuration x to a real number estimating the true probability p_data(x).

The maximum likelihood estimator for θ is then defined as

θ_ML = arg max_θ p_model(X; θ)  (5.56)
     = arg max_θ Π_{i=1}^m p_model(x^(i); θ).  (5.57)
This product over many probabilities can be inconvenient for a variety of reasons. For example, it is prone to numerical underflow. To obtain a more convenient but equivalent optimization problem, we observe that taking the logarithm of the likelihood does not change its arg max but does conveniently transform a product into a sum:

θ_ML = arg max_θ Σ_{i=1}^m log p_model(x^(i); θ).  (5.58)

Because the arg max does not change when we rescale the cost function, we can divide by m to obtain a version of the criterion that is expressed as an expectation with respect to the empirical distribution p̂_data defined by the training data:

θ_ML = arg max_θ E_{x∼p̂_data} log p_model(x; θ).  (5.59)

One way to interpret maximum likelihood estimation is to view it as minimizing the dissimilarity between the empirical distribution p̂_data defined by the training set and the model distribution, with the degree of dissimilarity between the two measured by the KL divergence. The KL divergence is given by

D_KL(p̂_data ‖ p_model) = E_{x∼p̂_data} [ log p̂_data(x) − log p_model(x) ].  (5.60)

The term on the left is a function only of the data generating process, not the model. This means when we train the model to minimize the KL divergence, we need only minimize

− E_{x∼p̂_data} [ log p_model(x) ],  (5.61)

which is of course the same as the maximization in equation 5.59.

Minimizing this KL divergence corresponds exactly to minimizing the cross-entropy between the distributions. Many authors use the term "cross-entropy" to identify specifically the negative log-likelihood of a Bernoulli or softmax distribution, but that is a misnomer. Any loss consisting of a negative log-likelihood is a cross-entropy between the empirical distribution defined by the training set and the probability distribution defined by the model. For example, mean squared error is the cross-entropy between the empirical distribution and a Gaussian model.

We can thus see maximum likelihood as an attempt to make the model distribution match the empirical distribution p̂_data. Ideally, we would like to match the true data generating distribution p_data, but we have no direct access to this distribution.
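As a small illustration of the claim that mean squared error is the cross-entropy between the empirical distribution and a Gaussian model, the assumed sketch below evaluates the average negative log-likelihood of a fixed-variance Gaussian predictor and shows that it differs from the MSE only by a scale factor and an additive constant, so both are minimized by the same parameters.

```python
# Minimal sketch: for p(y | x) = N(y; y_hat, sigma^2) with fixed sigma,
# the average negative log-likelihood equals MSE / (2 sigma^2) plus a constant,
# so minimizing the NLL and minimizing the MSE select the same parameters.
import numpy as np

def avg_nll_gaussian(y, y_hat, sigma=1.0):
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y - y_hat) ** 2 / (2 * sigma**2))

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

rng = np.random.default_rng(0)
y = rng.normal(size=100)
y_hat = y + 0.3 * rng.normal(size=100)   # some model's predictions

sigma = 1.0
constant = 0.5 * np.log(2 * np.pi * sigma**2)
print("average NLL            :", avg_nll_gaussian(y, y_hat, sigma))
print("MSE/(2 sigma^2) + const:", mse(y, y_hat) / (2 * sigma**2) + constant)
```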
While the optimal θ is the same regardless of whether we are maximizing the likelihood or minimizing the KL divergence, the values of the objective functions are different. In software, we often phrase both as minimizing a cost function. Maximum likelihood thus becomes minimization of the negative log-likelihood (NLL), or equivalently, minimization of the cross entropy. The perspective of maximum likelihood as minimum KL divergence becomes helpful in this case because the KL divergence has a known minimum value of zero. The negative log-likelihood can actually become negative when x is real-valued.

5.5.1 Conditional Log-Likelihood and Mean Squared Error

The maximum likelihood estimator can readily be generalized to the case where our goal is to estimate a conditional probability P(y | x; θ) in order to predict y given x. This is actually the most common situation because it forms the basis for most supervised learning. If X represents all our inputs and Y all our observed targets, then the conditional maximum likelihood estimator is

θ_ML = arg max_θ P(Y | X; θ).  (5.62)

If the examples are assumed to be i.i.d., then this can be decomposed into

θ_ML = arg max_θ Σ_{i=1}^m log P(y^(i) | x^(i); θ).  (5.63)

Example: Linear Regression as Maximum Likelihood  Linear regression, introduced earlier in section 5.1.4, may be justified as a maximum likelihood procedure. Previously, we motivated linear regression as an algorithm that learns to take an input x and produce an output value ŷ. The mapping from x to ŷ is chosen to minimize mean squared error, a criterion that we introduced more or less arbitrarily. We now revisit linear regression from the point of view of maximum likelihood estimation. Instead of producing a single prediction ŷ, we now think of the model as producing a conditional distribution p(y | x). We can imagine that with an infinitely large training set, we might see several training examples with the same input value x but different values of y. The goal of the learning algorithm is now to fit the distribution p(y | x) to all of those different y values that are all compatible with x. To derive the same linear regression algorithm we obtained before, we define p(y | x) = N(y; ŷ(x; w), σ²). The function ŷ(x; w) gives the prediction of the mean of the Gaussian. In this example, we assume that the variance is fixed to some constant σ² chosen by the user. We will see that this choice of the functional form of p(y | x) causes the maximum likelihood estimation procedure to yield the same learning algorithm as we developed before. Since the
examples are assumed to be i.i.d., the conditional log-likelihood (equation 5.63) is given by

Σ_{i=1}^m log p(y^(i) | x^(i); θ)  (5.64)
= −m log σ − (m/2) log(2π) − Σ_{i=1}^m (ŷ^(i) − y^(i))² / (2σ²),  (5.65)

where ŷ^(i) is the output of the linear regression on the i-th input x^(i) and m is the number of the training examples. Comparing the log-likelihood with the mean squared error,

MSE_train = (1/m) Σ_{i=1}^m ||ŷ^(i) − y^(i)||²,  (5.66)

we immediately see that maximizing the log-likelihood with respect to w yields the same estimate of the parameters w as does minimizing the mean squared error. The two criteria have different values but the same location of the optimum. This justifies the use of the MSE as a maximum likelihood estimation procedure. As we will see, the maximum likelihood estimator has several desirable properties.

5.5.2 Properties of Maximum Likelihood

The main appeal of the maximum likelihood estimator is that it can be shown to be the best estimator asymptotically, as the number of examples m → ∞, in terms of its rate of convergence as m increases.

Under appropriate conditions, the maximum likelihood estimator has the property of consistency (see section 5.4.5 above), meaning that as the number of training examples approaches infinity, the maximum likelihood estimate of a parameter converges to the true value of the parameter. These conditions are:

• The true distribution p_data must lie within the model family p_model(·; θ). Otherwise, no estimator can recover p_data.

• The true distribution p_data must correspond to exactly one value of θ. Otherwise, maximum likelihood can recover the correct p_data, but will not be able to determine which value of θ was used by the data generating process.

There are other inductive principles besides the maximum likelihood estimator, many of which share the property of being consistent estimators. However,
  • 151. CHAPTER 5. MACHINE LEARNING BASICS consistent estimators can differ in their statistic efficiency, meaning that one consistent estimator may obtain lower generalization error for a fixed number of samples m, or equivalently, may require fewer examples to obtain a fixed level of generalization error. Statistical efficiency is typically studied in the parametric case (like in linear regression) where our goal is to estimate the value of a parameter (and assuming it is possible to identify the true parameter), not the value of a function. A way to measure how close we are to the true parameter is by the expected mean squared error, computing the squared difference between the estimated and true parameter values, where the expectation is over m training samples from the data generating distribution. That parametric mean squared error decreases as m increases, and for m large, the Cramér-Rao lower bound ( , ; , ) shows that no Rao 1945 Cramér 1946 consistent estimator has a lower mean squared error than the maximum likelihood estimator. For these reasons (consistency and efficiency), maximum likelihood is often considered the preferred estimator to use for machine learning. When the number of examples is small enough to yield overfitting behavior, regularization strategies such as weight decay may be used to obtain a biased version of maximum likelihood that has less variance when training data is limited. 5.6 Bayesian Statistics So far we have discussed frequentist statistics and approaches based on estimat- ing a single value of θ, then making all predictions thereafter based on that one estimate. Another approach is to consider all possible values of θ when making a prediction. The latter is the domain of Bayesian statistics. As discussed in section , the frequentist perspective is that the true 5.4.1 parameter value θ is fixed but unknown, while the point estimate ˆ θ is a random variable on account of it being a function of the dataset (which is seen as random). The Bayesian perspective on statistics is quite different. The Bayesian uses probability to reflect degrees of certainty of states of knowledge. The dataset is directly observed and so is not random. On the other hand, the true parameter θ is unknown or uncertain and thus is represented as a random variable. Before observing the data, we represent our knowledge of θ using the prior probability distribution, p(θ) (sometimes referred to as simply “the prior”). Generally, the machine learning practitioner selects a prior distribution that is quite broad (i.e. with high entropy) to reflect a high degree of uncertainty in the 135
value of θ before observing any data. For example, one might assume that a priori θ lies in some finite range or volume, with a uniform distribution. Many priors instead reflect a preference for "simpler" solutions (such as smaller magnitude coefficients, or a function that is closer to being constant).

Now consider that we have a set of data samples {x^(1), . . . , x^(m)}. We can recover the effect of data on our belief about θ by combining the data likelihood p(x^(1), . . . , x^(m) | θ) with the prior via Bayes' rule:

p(θ | x^(1), . . . , x^(m)) = p(x^(1), . . . , x^(m) | θ) p(θ) / p(x^(1), . . . , x^(m)).  (5.67)

In the scenarios where Bayesian estimation is typically used, the prior begins as a relatively uniform or Gaussian distribution with high entropy, and the observation of the data usually causes the posterior to lose entropy and concentrate around a few highly likely values of the parameters.

Relative to maximum likelihood estimation, Bayesian estimation offers two important differences. First, unlike the maximum likelihood approach that makes predictions using a point estimate of θ, the Bayesian approach is to make predictions using a full distribution over θ. For example, after observing m examples, the predicted distribution over the next data sample, x^(m+1), is given by

p(x^(m+1) | x^(1), . . . , x^(m)) = ∫ p(x^(m+1) | θ) p(θ | x^(1), . . . , x^(m)) dθ.  (5.68)

Here each value of θ with positive probability density contributes to the prediction of the next example, with the contribution weighted by the posterior density itself. After having observed {x^(1), . . . , x^(m)}, if we are still quite uncertain about the value of θ, then this uncertainty is incorporated directly into any predictions we might make.

In section 5.4, we discussed how the frequentist approach addresses the uncertainty in a given point estimate of θ by evaluating its variance. The variance of the estimator is an assessment of how the estimate might change with alternative samplings of the observed data. The Bayesian answer to the question of how to deal with the uncertainty in the estimator is to simply integrate over it, which tends to protect well against overfitting. This integral is of course just an application of the laws of probability, making the Bayesian approach simple to justify, while the frequentist machinery for constructing an estimator is based on the rather ad hoc decision to summarize all knowledge contained in the dataset with a single point estimate.

The second important difference between the Bayesian approach to estimation and the maximum likelihood approach is due to the contribution of the Bayesian
prior distribution. The prior has an influence by shifting probability mass density towards regions of the parameter space that are preferred a priori. In practice, the prior often expresses a preference for models that are simpler or more smooth. Critics of the Bayesian approach identify the prior as a source of subjective human judgment impacting the predictions.

Bayesian methods typically generalize much better when limited training data is available, but typically suffer from high computational cost when the number of training examples is large.

Example: Bayesian Linear Regression  Here we consider the Bayesian estimation approach to learning the linear regression parameters. In linear regression, we learn a linear mapping from an input vector x ∈ R^n to predict the value of a scalar y ∈ R. The prediction is parametrized by the vector w ∈ R^n:

ŷ = wᵀx.  (5.69)

Given a set of m training samples (X^(train), y^(train)), we can express the prediction of y over the entire training set as:

ŷ^(train) = X^(train) w.  (5.70)

Expressed as a Gaussian conditional distribution on y^(train), we have

p(y^(train) | X^(train), w) = N(y^(train); X^(train) w, I)  (5.71)
∝ exp( −(1/2) (y^(train) − X^(train) w)ᵀ (y^(train) − X^(train) w) ),  (5.72)

where we follow the standard MSE formulation in assuming that the Gaussian variance on y is one. In what follows, to reduce the notational burden, we refer to (X^(train), y^(train)) as simply (X, y).

To determine the posterior distribution over the model parameter vector w, we first need to specify a prior distribution. The prior should reflect our naive belief about the value of these parameters. While it is sometimes difficult or unnatural to express our prior beliefs in terms of the parameters of the model, in practice we typically assume a fairly broad distribution expressing a high degree of uncertainty about θ. For real-valued parameters it is common to use a Gaussian as a prior distribution:

p(w) = N(w; µ₀, Λ₀) ∝ exp( −(1/2) (w − µ₀)ᵀ Λ₀⁻¹ (w − µ₀) ),  (5.73)
where µ₀ and Λ₀ are the prior distribution mean vector and covariance matrix respectively.¹

¹ Unless there is a reason to assume a particular covariance structure, we typically assume a diagonal covariance matrix Λ₀ = diag(λ₀).

With the prior thus specified, we can now proceed in determining the posterior distribution over the model parameters.

p(w | X, y) ∝ p(y | X, w) p(w)  (5.74)
∝ exp( −(1/2) (y − Xw)ᵀ(y − Xw) ) exp( −(1/2) (w − µ₀)ᵀ Λ₀⁻¹ (w − µ₀) )  (5.75)
∝ exp( −(1/2) ( −2yᵀXw + wᵀXᵀXw + wᵀΛ₀⁻¹w − 2µ₀ᵀΛ₀⁻¹w ) ).  (5.76)

We now define Λ_m = (XᵀX + Λ₀⁻¹)⁻¹ and µ_m = Λ_m (Xᵀy + Λ₀⁻¹µ₀). Using these new variables, we find that the posterior may be rewritten as a Gaussian distribution:

p(w | X, y) ∝ exp( −(1/2) (w − µ_m)ᵀ Λ_m⁻¹ (w − µ_m) + (1/2) µ_mᵀ Λ_m⁻¹ µ_m )  (5.77)
∝ exp( −(1/2) (w − µ_m)ᵀ Λ_m⁻¹ (w − µ_m) ).  (5.78)

All terms that do not include the parameter vector w have been omitted; they are implied by the fact that the distribution must be normalized to integrate to 1. Equation 3.23 shows how to normalize a multivariate Gaussian distribution.

Examining this posterior distribution allows us to gain some intuition for the effect of Bayesian inference. In most situations, we set µ₀ to 0. If we set Λ₀ = (1/α) I, then µ_m gives the same estimate of w as does frequentist linear regression with a weight decay penalty of α wᵀw. One difference is that the Bayesian estimate is undefined if α is set to zero: we are not allowed to begin the Bayesian learning process with an infinitely wide prior on w. The more important difference is that the Bayesian estimate provides a covariance matrix, showing how likely all the different values of w are, rather than providing only the estimate µ_m.

5.6.1 Maximum A Posteriori (MAP) Estimation

While the most principled approach is to make predictions using the full Bayesian posterior distribution over the parameter θ, it is still often desirable to have a
single point estimate. One common reason for desiring a point estimate is that most operations involving the Bayesian posterior for most interesting models are intractable, and a point estimate offers a tractable approximation. Rather than simply returning to the maximum likelihood estimate, we can still gain some of the benefit of the Bayesian approach by allowing the prior to influence the choice of the point estimate. One rational way to do this is to choose the maximum a posteriori (MAP) point estimate. The MAP estimate chooses the point of maximal posterior probability (or maximal probability density in the more common case of continuous θ):

θ_MAP = arg max_θ p(θ | x) = arg max_θ [ log p(x | θ) + log p(θ) ].  (5.79)

We recognize, above on the right hand side, log p(x | θ), i.e. the standard log-likelihood term, and log p(θ), corresponding to the prior distribution.

As an example, consider a linear regression model with a Gaussian prior on the weights w. If this prior is given by N(w; 0, (1/λ) I²), then the log-prior term in equation 5.79 is proportional to the familiar λ wᵀw weight decay penalty, plus a term that does not depend on w and does not affect the learning process. MAP Bayesian inference with a Gaussian prior on the weights thus corresponds to weight decay.

As with full Bayesian inference, MAP Bayesian inference has the advantage of leveraging information that is brought by the prior and cannot be found in the training data. This additional information helps to reduce the variance in the MAP point estimate (in comparison to the ML estimate). However, it does so at the price of increased bias.

Many regularized estimation strategies, such as maximum likelihood learning regularized with weight decay, can be interpreted as making the MAP approximation to Bayesian inference. This view applies when the regularization consists of adding an extra term to the objective function that corresponds to log p(θ). Not all regularization penalties correspond to MAP Bayesian inference. For example, some regularizer terms may not be the logarithm of a probability distribution. Other regularization terms depend on the data, which of course a prior probability distribution is not allowed to do.

MAP Bayesian inference provides a straightforward way to design complicated yet interpretable regularization terms. For example, a more complicated penalty term can be derived by using a mixture of Gaussians, rather than a single Gaussian distribution, as the prior (Nowlan and Hinton, 1992).
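To tie the last two sections together, here is an assumed NumPy sketch (the data and the choice of α are placeholders) that computes the posterior (Λ_m, µ_m) from equations 5.76-5.78 for the prior µ₀ = 0 and Λ₀ = (1/α) I, and checks that the posterior mean, which is also the MAP estimate under this Gaussian prior, coincides with the frequentist weight decay solution for the same α.

```python
# Minimal sketch: Bayesian linear regression posterior (equations 5.76-5.78)
# with prior mean 0 and covariance (1/alpha) I, compared with the weight
# decay (ridge) point estimate, which is also the MAP estimate here.
import numpy as np

rng = np.random.default_rng(0)
m, n, alpha = 50, 3, 2.0
X = rng.normal(size=(m, n))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(size=m)          # unit observation noise, as in the text

# Posterior covariance and mean: Lambda_m = (X^T X + Lambda_0^{-1})^{-1},
# mu_m = Lambda_m (X^T y + Lambda_0^{-1} mu_0), with mu_0 = 0, Lambda_0^{-1} = alpha I.
Lambda_m = np.linalg.inv(X.T @ X + alpha * np.eye(n))
mu_m = Lambda_m @ (X.T @ y)

# Frequentist weight decay solution for the same alpha.
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

print("posterior mean mu_m :", np.round(mu_m, 4))
print("weight decay w      :", np.round(w_ridge, 4))   # identical to mu_m
print("posterior covariance diagonal:", np.round(np.diag(Lambda_m), 4))
```

Unlike the frequentist point estimate alone, the posterior covariance Λ_m printed at the end quantifies how uncertain each weight remains after seeing the data, which is the extra information the text attributes to the Bayesian estimate.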
5.7 Supervised Learning Algorithms

Recall from section 5.1.3 that supervised learning algorithms are, roughly speaking, learning algorithms that learn to associate some input with some output, given a training set of examples of inputs $x$ and outputs $y$. In many cases the outputs $y$ may be difficult to collect automatically and must be provided by a human “supervisor,” but the term still applies even when the training set targets were collected automatically.

5.7.1 Probabilistic Supervised Learning

Most supervised learning algorithms in this book are based on estimating a probability distribution $p(y \mid x)$. We can do this simply by using maximum likelihood estimation to find the best parameter vector $\theta$ for a parametric family of distributions $p(y \mid x; \theta)$.

We have already seen that linear regression corresponds to the family

$p(y \mid x; \theta) = \mathcal{N}(y; \theta^\top x, I)$.  (5.80)

We can generalize linear regression to the classification scenario by defining a different family of probability distributions. If we have two classes, class 0 and class 1, then we need only specify the probability of one of these classes. The probability of class 1 determines the probability of class 0, because these two values must add up to 1.

The normal distribution over real-valued numbers that we used for linear regression is parametrized in terms of a mean. Any value we supply for this mean is valid. A distribution over a binary variable is slightly more complicated, because its mean must always be between 0 and 1. One way to solve this problem is to use the logistic sigmoid function to squash the output of the linear function into the interval (0, 1) and interpret that value as a probability:

$p(y = 1 \mid x; \theta) = \sigma(\theta^\top x)$.  (5.81)

This approach is known as logistic regression (a somewhat strange name since we use the model for classification rather than regression).

In the case of linear regression, we were able to find the optimal weights by solving the normal equations. Logistic regression is somewhat more difficult. There is no closed-form solution for its optimal weights. Instead, we must search for them by maximizing the log-likelihood. We can do this by minimizing the negative log-likelihood (NLL) using gradient descent.
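A minimal sketch (not from the book) of that procedure follows: logistic regression fit by batch gradient descent on the mean NLL. The learning rate, step count, and data layout are illustrative assumptions.

```python
# Logistic regression by gradient descent on the negative log-likelihood.
# X has one example per row, y holds labels in {0, 1}.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic_regression(X, y, lr=0.1, n_steps=1000):
    theta = np.zeros(X.shape[1])
    m = len(y)
    for _ in range(n_steps):
        p = sigmoid(X @ theta)        # p(y = 1 | x; theta) for every example
        grad = X.T @ (p - y) / m      # gradient of the mean NLL
        theta -= lr * grad            # gradient descent step
    return theta
```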
This same strategy can be applied to essentially any supervised learning problem, by writing down a parametric family of conditional probability distributions over the right kind of input and output variables.

5.7.2 Support Vector Machines

One of the most influential approaches to supervised learning is the support vector machine (Boser et al., 1992; Cortes and Vapnik, 1995). This model is similar to logistic regression in that it is driven by a linear function $w^\top x + b$. Unlike logistic regression, the support vector machine does not provide probabilities, but only outputs a class identity. The SVM predicts that the positive class is present when $w^\top x + b$ is positive. Likewise, it predicts that the negative class is present when $w^\top x + b$ is negative.

One key innovation associated with support vector machines is the kernel trick. The kernel trick consists of observing that many machine learning algorithms can be written exclusively in terms of dot products between examples. For example, it can be shown that the linear function used by the support vector machine can be re-written as

$w^\top x + b = b + \sum_{i=1}^{m} \alpha_i\, x^\top x^{(i)}$  (5.82)

where $x^{(i)}$ is a training example and $\alpha$ is a vector of coefficients. Rewriting the learning algorithm this way allows us to replace $x$ by the output of a given feature function $\phi(x)$ and the dot product with a function $k(x, x^{(i)}) = \phi(x) \cdot \phi(x^{(i)})$ called a kernel. The $\cdot$ operator represents an inner product analogous to $\phi(x)^\top \phi(x^{(i)})$. For some feature spaces, we may not use literally the vector inner product. In some infinite-dimensional spaces, we need to use other kinds of inner products, for example, inner products based on integration rather than summation. A complete development of these kinds of inner products is beyond the scope of this book.

After replacing dot products with kernel evaluations, we can make predictions using the function

$f(x) = b + \sum_i \alpha_i\, k(x, x^{(i)})$.  (5.83)

This function is nonlinear with respect to $x$, but the relationship between $\phi(x)$ and $f(x)$ is linear. Also, the relationship between $\alpha$ and $f(x)$ is linear. The kernel-based function is exactly equivalent to preprocessing the data by applying $\phi(x)$ to all inputs, then learning a linear model in the new transformed space.

The kernel trick is powerful for two reasons. First, it allows us to learn models that are nonlinear as a function of $x$ using convex optimization techniques that are
guaranteed to converge efficiently. This is possible because we consider $\phi$ fixed and optimize only $\alpha$, i.e., the optimization algorithm can view the decision function as being linear in a different space. Second, the kernel function $k$ often admits an implementation that is significantly more computationally efficient than naively constructing two $\phi(x)$ vectors and explicitly taking their dot product.

In some cases, $\phi(x)$ can even be infinite dimensional, which would result in an infinite computational cost for the naive, explicit approach. In many cases, $k(x, x')$ is a nonlinear, tractable function of $x$ even when $\phi(x)$ is intractable. As an example of an infinite-dimensional feature space with a tractable kernel, we construct a feature mapping $\phi(x)$ over the non-negative integers $x$. Suppose that this mapping returns a vector containing $x$ ones followed by infinitely many zeros. We can write a kernel function $k(x, x^{(i)}) = \min(x, x^{(i)})$ that is exactly equivalent to the corresponding infinite-dimensional dot product.

The most commonly used kernel is the Gaussian kernel

$k(u, v) = \mathcal{N}(u - v;\, 0,\, \sigma^2 I)$  (5.84)

where $\mathcal{N}(x; \mu, \Sigma)$ is the standard normal density. This kernel is also known as the radial basis function (RBF) kernel, because its value decreases along lines in $v$ space radiating outward from $u$. The Gaussian kernel corresponds to a dot product in an infinite-dimensional space, but the derivation of this space is less straightforward than in our example of the min kernel over the integers.

We can think of the Gaussian kernel as performing a kind of template matching. A training example $x$ associated with training label $y$ becomes a template for class $y$. When a test point $x'$ is near $x$ according to Euclidean distance, the Gaussian kernel has a large response, indicating that $x'$ is very similar to the $x$ template. The model then puts a large weight on the associated training label $y$. Overall, the prediction will combine many such training labels weighted by the similarity of the corresponding training examples.

Support vector machines are not the only algorithm that can be enhanced using the kernel trick. Many other linear models can be enhanced in this way. The category of algorithms that employ the kernel trick is known as kernel machines or kernel methods (Williams and Rasmussen, 1996; Schölkopf et al., 1999).

A major drawback to kernel machines is that the cost of evaluating the decision function is linear in the number of training examples, because the $i$-th example contributes a term $\alpha_i k(x, x^{(i)})$ to the decision function. Support vector machines are able to mitigate this by learning an $\alpha$ vector that contains mostly zeros. Classifying a new example then requires evaluating the kernel function only for the training examples that have non-zero $\alpha_i$. These training examples are known as support vectors.
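The following is a minimal sketch (not from the book) of the kernelized decision function in equation 5.83 with a Gaussian (RBF) kernel. The coefficients alpha and the bias b would normally come from SVM training; here they, along with all parameter names, are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """RBF kernel: proportional to exp(-||u - v||^2 / (2 sigma^2))."""
    diff = u - v
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def kernel_decision_function(x, X_train, alpha, b, kernel=gaussian_kernel):
    """f(x) = b + sum_i alpha_i k(x, x^(i)); only nonzero alpha_i
    (the support vectors) contribute to the sum."""
    return b + sum(a * kernel(x, xi)
                   for a, xi in zip(alpha, X_train) if a != 0.0)
```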
Kernel machines also suffer from a high computational cost of training when the dataset is large. We will revisit this idea in section 5.9. Kernel machines with generic kernels struggle to generalize well. We will explain why in section 5.11. The modern incarnation of deep learning was designed to overcome these limitations of kernel machines. The current deep learning renaissance began when Hinton et al. (2006) demonstrated that a neural network could outperform the RBF kernel SVM on the MNIST benchmark.

5.7.3 Other Simple Supervised Learning Algorithms

We have already briefly encountered another non-probabilistic supervised learning algorithm, nearest neighbor regression. More generally, k-nearest neighbors is a family of techniques that can be used for classification or regression. As a non-parametric learning algorithm, k-nearest neighbors is not restricted to a fixed number of parameters. We usually think of the k-nearest neighbors algorithm as not having any parameters, but rather implementing a simple function of the training data. In fact, there is not even really a training stage or learning process. Instead, at test time, when we want to produce an output $y$ for a new test input $x$, we find the $k$ nearest neighbors to $x$ in the training data $X$. We then return the average of the corresponding $y$ values in the training set. This works for essentially any kind of supervised learning where we can define an average over $y$ values. In the case of classification, we can average over one-hot code vectors $c$ with $c_y = 1$ and $c_i = 0$ for all other values of $i$. We can then interpret the average over these one-hot codes as giving a probability distribution over classes. As a non-parametric learning algorithm, k-nearest neighbors can achieve very high capacity. For example, suppose we have a multiclass classification task and measure performance with 0-1 loss. In this setting, 1-nearest neighbor converges to double the Bayes error as the number of training examples approaches infinity. The error in excess of the Bayes error results from choosing a single neighbor by breaking ties between equally distant neighbors randomly. When there is infinite training data, all test points $x$ will have infinitely many training set neighbors at distance zero. If we allow the algorithm to use all of these neighbors to vote, rather than randomly choosing one of them, the procedure converges to the Bayes error rate.

The high capacity of k-nearest neighbors allows it to obtain high accuracy given a large training set. However, it does so at high computational cost, and it may generalize very badly given a small, finite training set. One weakness of k-nearest neighbors is that it cannot learn that one feature is more discriminative than another. For example, imagine we have a regression task with $x \in \mathbb{R}^{100}$ drawn from an isotropic Gaussian
distribution, but only a single variable $x_1$ is relevant to the output. Suppose further that this feature simply encodes the output directly, i.e. that $y = x_1$ in all cases. Nearest neighbor regression will not be able to detect this simple pattern. The nearest neighbor of most points $x$ will be determined by the large number of features $x_2$ through $x_{100}$, not by the lone feature $x_1$. Thus the output on small training sets will essentially be random.
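A minimal sketch (not from the book) of the k-nearest neighbors prediction rule described above follows: average the $y$ values of the $k$ closest training points. For classification, y_train can hold one-hot vectors, so the average becomes a distribution over classes. All names and defaults are illustrative assumptions.

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=5):
    """Return the average target of the k training examples nearest to x."""
    distances = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(distances)[:k]               # indices of the k nearest
    return y_train[nearest].mean(axis=0)
```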
Figure 5.7: Diagrams describing how a decision tree works. (Top) Each node of the tree chooses to send the input example to the child node on the left (0) or the child node on the right (1). Internal nodes are drawn as circles and leaf nodes as squares. Each node is displayed with a binary string identifier corresponding to its position in the tree, obtained by appending a bit to its parent identifier (0 = choose left or top, 1 = choose right or bottom). (Bottom) The tree divides space into regions. The 2D plane shows how a decision tree might divide $\mathbb{R}^2$. The nodes of the tree are plotted in this plane, with each internal node drawn along the dividing line it uses to categorize examples, and leaf nodes drawn in the center of the region of examples they receive. The result is a piecewise-constant function, with one piece per leaf. Each leaf requires at least one training example to define, so it is not possible for the decision tree to learn a function that has more local maxima than the number of training examples.
Another type of learning algorithm that also breaks the input space into regions and has separate parameters for each region is the decision tree (Breiman et al., 1984) and its many variants. As shown in figure 5.7, each node of the decision tree is associated with a region in the input space, and internal nodes break that region into one sub-region for each child of the node (typically using an axis-aligned cut). Space is thus sub-divided into non-overlapping regions, with a one-to-one correspondence between leaf nodes and input regions. Each leaf node usually maps every point in its input region to the same output. Decision trees are usually trained with specialized algorithms that are beyond the scope of this book. The learning algorithm can be considered non-parametric if it is allowed to learn a tree of arbitrary size, though decision trees are usually regularized with size constraints that turn them into parametric models in practice. Decision trees as they are typically used, with axis-aligned splits and constant outputs within each node, struggle to solve some problems that are easy even for logistic regression. For example, if we have a two-class problem and the positive class occurs wherever $x_2 > x_1$, the decision boundary is not axis-aligned. The decision tree will thus need to approximate the decision boundary with many nodes, implementing a step function that constantly walks back and forth across the true decision function with axis-aligned steps.

As we have seen, nearest neighbor predictors and decision trees have many limitations. Nonetheless, they are useful learning algorithms when computational resources are constrained. We can also build intuition for more sophisticated learning algorithms by thinking about the similarities and differences between sophisticated algorithms and k-NN or decision tree baselines.

See Murphy (2012), Bishop (2006), Hastie et al. (2001) or other machine learning textbooks for more material on traditional supervised learning algorithms.

5.8 Unsupervised Learning Algorithms

Recall from section 5.1.3 that unsupervised algorithms are those that experience only “features” but not a supervision signal. The distinction between supervised and unsupervised algorithms is not formally and rigidly defined because there is no objective test for distinguishing whether a value is a feature or a target provided by a supervisor. Informally, unsupervised learning refers to most attempts to extract information from a distribution that do not require human labor to annotate examples. The term is usually associated with density estimation, learning to draw samples from a distribution, learning to denoise data from some distribution, finding a manifold that the data lies near, or clustering the data into groups of
related examples.

A classic unsupervised learning task is to find the “best” representation of the data. By “best” we can mean different things, but generally speaking we are looking for a representation that preserves as much information about $x$ as possible while obeying some penalty or constraint aimed at keeping the representation simpler or more accessible than $x$ itself.

There are multiple ways of defining a simpler representation. Three of the most common include lower dimensional representations, sparse representations and independent representations. Low-dimensional representations attempt to compress as much information about $x$ as possible in a smaller representation. Sparse representations (Barlow, 1989; Olshausen and Field, 1996; Hinton and Ghahramani, 1997) embed the dataset into a representation whose entries are mostly zeroes for most inputs. The use of sparse representations typically requires increasing the dimensionality of the representation, so that the representation becoming mostly zeroes does not discard too much information. This results in an overall structure of the representation that tends to distribute data along the axes of the representation space. Independent representations attempt to disentangle the sources of variation underlying the data distribution such that the dimensions of the representation are statistically independent.

Of course these three criteria are certainly not mutually exclusive. Low-dimensional representations often yield elements that have fewer or weaker dependencies than the original high-dimensional data. This is because one way to reduce the size of a representation is to find and remove redundancies. Identifying and removing more redundancy allows the dimensionality reduction algorithm to achieve more compression while discarding less information.

The notion of representation is one of the central themes of deep learning and therefore one of the central themes in this book. In this section, we develop some simple examples of representation learning algorithms. Together, these example algorithms show how to operationalize all three of the criteria above. Most of the remaining chapters introduce additional representation learning algorithms that develop these criteria in different ways or introduce other criteria.

5.8.1 Principal Components Analysis

In section 2.12, we saw that the principal components analysis algorithm provides a means of compressing data. We can also view PCA as an unsupervised learning algorithm that learns a representation of data. This representation is based on two of the criteria for a simple representation described above. PCA learns a
Figure 5.8: PCA learns a linear projection that aligns the direction of greatest variance with the axes of the new space. (Left) The original data consists of samples of $x$. In this space, the variance might occur along directions that are not axis-aligned. (Right) The transformed data $z = x^\top W$ now varies most along the axis $z_1$. The direction of second most variance is now along $z_2$.

representation that has lower dimensionality than the original input. It also learns a representation whose elements have no linear correlation with each other. This is a first step toward the criterion of learning representations whose elements are statistically independent. To achieve full independence, a representation learning algorithm must also remove the nonlinear relationships between variables.

PCA learns an orthogonal, linear transformation of the data that projects an input $x$ to a representation $z$ as shown in figure 5.8. In section 2.12, we saw that we could learn a one-dimensional representation that best reconstructs the original data (in the sense of mean squared error) and that this representation actually corresponds to the first principal component of the data. Thus we can use PCA as a simple and effective dimensionality reduction method that preserves as much of the information in the data as possible (again, as measured by least-squares reconstruction error). In the following, we will study how the PCA representation decorrelates the original data representation $X$.

Let us consider the $m \times n$ design matrix $X$. We will assume that the data has a mean of zero, $\mathbb{E}[x] = 0$. If this is not the case, the data can easily be centered by subtracting the mean from all examples in a preprocessing step.

The unbiased sample covariance matrix associated with $X$ is given by:

$\mathrm{Var}[x] = \frac{1}{m - 1} X^\top X$.  (5.85)
PCA finds a representation (through linear transformation) $z = x^\top W$ where $\mathrm{Var}[z]$ is diagonal.

In section 2.12, we saw that the principal components of a design matrix $X$ are given by the eigenvectors of $X^\top X$. From this view,

$X^\top X = W \Lambda W^\top$.  (5.86)

In this section, we exploit an alternative derivation of the principal components. The principal components may also be obtained via the singular value decomposition. Specifically, they are the right singular vectors of $X$. To see this, let $W$ be the right singular vectors in the decomposition $X = U \Sigma W^\top$. We then recover the original eigenvector equation with $W$ as the eigenvector basis:

$X^\top X = \left(U \Sigma W^\top\right)^\top U \Sigma W^\top = W \Sigma^2 W^\top$.  (5.87)

The SVD is helpful to show that PCA results in a diagonal $\mathrm{Var}[z]$. Using the SVD of $X$, we can express the variance of $X$ as:

$\mathrm{Var}[x] = \frac{1}{m - 1} X^\top X$  (5.88)
$= \frac{1}{m - 1} \left(U \Sigma W^\top\right)^\top U \Sigma W^\top$  (5.89)
$= \frac{1}{m - 1} W \Sigma^\top U^\top U \Sigma W^\top$  (5.90)
$= \frac{1}{m - 1} W \Sigma^2 W^\top$,  (5.91)

where we use the fact that $U^\top U = I$ because the $U$ matrix of the singular value decomposition is defined to be orthogonal. This shows that if we take $z = x^\top W$, we can ensure that the covariance of $z$ is diagonal as required:

$\mathrm{Var}[z] = \frac{1}{m - 1} Z^\top Z$  (5.92)
$= \frac{1}{m - 1} W^\top X^\top X W$  (5.93)
$= \frac{1}{m - 1} W^\top W \Sigma^2 W^\top W$  (5.94)
$= \frac{1}{m - 1} \Sigma^2$,  (5.95)

where this time we use the fact that $W^\top W = I$, again from the definition of the SVD.
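A minimal sketch (not from the book) that checks this derivation numerically: projecting centered data onto the right singular vectors of $X$ yields a representation $z$ whose sample covariance is (approximately) diagonal. The synthetic data and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 3))  # correlated data
X = X - X.mean(axis=0)                       # center so E[x] = 0

U, s, Wt = np.linalg.svd(X, full_matrices=False)
W = Wt.T                                     # right singular vectors of X
Z = X @ W                                    # z = x^T W for each row of X

cov_z = Z.T @ Z / (len(X) - 1)               # should be close to diag(s**2 / (m-1))
print(np.round(cov_z, 6))
```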
The above analysis shows that when we project the data $x$ to $z$, via the linear transformation $W$, the resulting representation has a diagonal covariance matrix (as given by $\Sigma^2$), which immediately implies that the individual elements of $z$ are mutually uncorrelated.

This ability of PCA to transform data into a representation where the elements are mutually uncorrelated is a very important property of PCA. It is a simple example of a representation that attempts to disentangle the unknown factors of variation underlying the data. In the case of PCA, this disentangling takes the form of finding a rotation of the input space (described by $W$) that aligns the principal axes of variance with the basis of the new representation space associated with $z$.

While correlation is an important category of dependency between elements of the data, we are also interested in learning representations that disentangle more complicated forms of feature dependencies. For this, we will need more than what can be done with a simple linear transformation.

5.8.2 k-means Clustering

Another example of a simple representation learning algorithm is k-means clustering. The k-means clustering algorithm divides the training set into k different clusters of examples that are near each other. We can thus think of the algorithm as providing a k-dimensional one-hot code vector $h$ representing an input $x$. If $x$ belongs to cluster $i$, then $h_i = 1$ and all other entries of the representation $h$ are zero.

The one-hot code provided by k-means clustering is an example of a sparse representation, because the majority of its entries are zero for every input. Later, we will develop other algorithms that learn more flexible sparse representations, where more than one entry can be non-zero for each input $x$. One-hot codes are an extreme example of sparse representations that lose many of the benefits of a distributed representation. The one-hot code still confers some statistical advantages (it naturally conveys the idea that all examples in the same cluster are similar to each other) and it confers the computational advantage that the entire representation may be captured by a single integer.

The k-means algorithm works by initializing k different centroids $\{\mu^{(1)}, \ldots, \mu^{(k)}\}$ to different values, then alternating between two different steps until convergence. In one step, each training example is assigned to cluster $i$, where $i$ is the index of the nearest centroid $\mu^{(i)}$. In the other step, each centroid $\mu^{(i)}$ is updated to the mean of all training examples $x^{(j)}$ assigned to cluster $i$.
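A minimal sketch (not from the book) of the two alternating steps just described follows. The initialization scheme and fixed iteration count are simplifying assumptions; a practical implementation would also check for convergence.

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iters):
        # Assignment step: index of the nearest centroid for every example.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned examples.
        for i in range(k):
            if np.any(assign == i):
                centroids[i] = X[assign == i].mean(axis=0)
    return centroids, assign
```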
One difficulty pertaining to clustering is that the clustering problem is inherently ill-posed, in the sense that there is no single criterion that measures how well a clustering of the data corresponds to the real world. We can measure properties of the clustering, such as the average Euclidean distance from a cluster centroid to the members of the cluster. This allows us to tell how well we are able to reconstruct the training data from the cluster assignments. We do not know how well the cluster assignments correspond to properties of the real world. Moreover, there may be many different clusterings that all correspond well to some property of the real world. We may hope to find a clustering that relates to one feature but obtain a different, equally valid clustering that is not relevant to our task. For example, suppose that we run two clustering algorithms on a dataset consisting of images of red trucks, images of red cars, images of gray trucks, and images of gray cars. If we ask each clustering algorithm to find two clusters, one algorithm may find a cluster of cars and a cluster of trucks, while another may find a cluster of red vehicles and a cluster of gray vehicles. Suppose we also run a third clustering algorithm, which is allowed to determine the number of clusters. This may assign the examples to four clusters: red cars, red trucks, gray cars, and gray trucks. This new clustering now at least captures information about both attributes, but it has lost information about similarity. Red cars are in a different cluster from gray cars, just as they are in a different cluster from gray trucks. The output of the clustering algorithm does not tell us that red cars are more similar to gray cars than they are to gray trucks. They are different from both things, and that is all we know.

These issues illustrate some of the reasons that we may prefer a distributed representation to a one-hot representation. A distributed representation could have two attributes for each vehicle: one representing its color and one representing whether it is a car or a truck. It is still not entirely clear what the optimal distributed representation is (how can the learning algorithm know whether the two attributes we are interested in are color and car-versus-truck rather than manufacturer and age?), but having many attributes reduces the burden on the algorithm to guess which single attribute we care about, and allows us to measure similarity between objects in a fine-grained way by comparing many attributes instead of just testing whether one attribute matches.

5.9 Stochastic Gradient Descent

Nearly all of deep learning is powered by one very important algorithm: stochastic gradient descent, or SGD. Stochastic gradient descent is an extension of the
gradient descent algorithm introduced in section 4.3.

A recurring problem in machine learning is that large training sets are necessary for good generalization, but large training sets are also more computationally expensive.

The cost function used by a machine learning algorithm often decomposes as a sum over training examples of some per-example loss function. For example, the negative conditional log-likelihood of the training data can be written as

$J(\theta) = \mathbb{E}_{x, y \sim \hat{p}_{\mathrm{data}}} L(x, y, \theta) = \frac{1}{m} \sum_{i=1}^{m} L(x^{(i)}, y^{(i)}, \theta)$  (5.96)

where $L$ is the per-example loss $L(x, y, \theta) = -\log p(y \mid x; \theta)$.

For these additive cost functions, gradient descent requires computing

$\nabla_\theta J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \nabla_\theta L(x^{(i)}, y^{(i)}, \theta)$.  (5.97)

The computational cost of this operation is $O(m)$. As the training set size grows to billions of examples, the time to take a single gradient step becomes prohibitively long.

The insight of stochastic gradient descent is that the gradient is an expectation. The expectation may be approximately estimated using a small set of samples. Specifically, on each step of the algorithm, we can sample a minibatch of examples $\mathbb{B} = \{x^{(1)}, \ldots, x^{(m')}\}$ drawn uniformly from the training set. The minibatch size $m'$ is typically chosen to be a relatively small number of examples, ranging from 1 to a few hundred. Crucially, $m'$ is usually held fixed as the training set size $m$ grows. We may fit a training set with billions of examples using updates computed on only a hundred examples.

The estimate of the gradient is formed as

$g = \frac{1}{m'} \nabla_\theta \sum_{i=1}^{m'} L(x^{(i)}, y^{(i)}, \theta)$  (5.98)

using examples from the minibatch $\mathbb{B}$. The stochastic gradient descent algorithm then follows the estimated gradient downhill:

$\theta \leftarrow \theta - \epsilon g$,  (5.99)

where $\epsilon$ is the learning rate.
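A minimal sketch (not from the book) of the minibatch SGD update in equations 5.98 and 5.99 follows. The loss-gradient callback, batch size, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

def sgd(theta, X, y, grad_loss, lr=0.01, batch_size=100, n_steps=1000, seed=0):
    """grad_loss(theta, X_batch, y_batch) should return the average gradient
    of the per-example loss over the minibatch."""
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        idx = rng.integers(0, len(X), size=batch_size)   # sample a minibatch
        g = grad_loss(theta, X[idx], y[idx])             # gradient estimate
        theta = theta - lr * g                           # follow it downhill
    return theta
```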
Gradient descent in general has often been regarded as slow or unreliable. In the past, the application of gradient descent to non-convex optimization problems was regarded as foolhardy or unprincipled. Today, we know that the machine learning models described in part II work very well when trained with gradient descent. The optimization algorithm may not be guaranteed to arrive at even a local minimum in a reasonable amount of time, but it often finds a very low value of the cost function quickly enough to be useful.

Stochastic gradient descent has many important uses outside the context of deep learning. It is the main way to train large linear models on very large datasets. For a fixed model size, the cost per SGD update does not depend on the training set size $m$. In practice, we often use a larger model as the training set size increases, but we are not forced to do so. The number of updates required to reach convergence usually increases with training set size. However, as $m$ approaches infinity, the model will eventually converge to its best possible test error before SGD has sampled every example in the training set. Increasing $m$ further will not extend the amount of training time needed to reach the model’s best possible test error. From this point of view, one can argue that the asymptotic cost of training a model with SGD is $O(1)$ as a function of $m$.

Prior to the advent of deep learning, the main way to learn nonlinear models was to use the kernel trick in combination with a linear model. Many kernel learning algorithms require constructing an $m \times m$ matrix $G_{i,j} = k(x^{(i)}, x^{(j)})$. Constructing this matrix has computational cost $O(m^2)$, which is clearly undesirable for datasets with billions of examples. In academia, starting in 2006, deep learning was initially interesting because it was able to generalize to new examples better than competing algorithms when trained on medium-sized datasets with tens of thousands of examples. Soon after, deep learning garnered additional interest in industry, because it provided a scalable way of training nonlinear models on large datasets.

Stochastic gradient descent and many enhancements to it are described further in chapter 8.

5.10 Building a Machine Learning Algorithm

Nearly all deep learning algorithms can be described as particular instances of a fairly simple recipe: combine a specification of a dataset, a cost function, an optimization procedure and a model.

For example, the linear regression algorithm combines a dataset consisting of
$X$ and $y$, the cost function

$J(w, b) = -\mathbb{E}_{x, y \sim \hat{p}_{\mathrm{data}}} \log p_{\mathrm{model}}(y \mid x)$,  (5.100)

the model specification $p_{\mathrm{model}}(y \mid x) = \mathcal{N}(y;\, x^\top w + b,\, 1)$, and, in most cases, the optimization algorithm defined by solving for where the gradient of the cost is zero using the normal equations.

By realizing that we can replace any of these components mostly independently from the others, we can obtain a very wide variety of algorithms.

The cost function typically includes at least one term that causes the learning process to perform statistical estimation. The most common cost function is the negative log-likelihood, so that minimizing the cost function causes maximum likelihood estimation.

The cost function may also include additional terms, such as regularization terms. For example, we can add weight decay to the linear regression cost function to obtain

$J(w, b) = \lambda \|w\|_2^2 - \mathbb{E}_{x, y \sim \hat{p}_{\mathrm{data}}} \log p_{\mathrm{model}}(y \mid x)$.  (5.101)

This still allows closed-form optimization.

If we change the model to be nonlinear, then most cost functions can no longer be optimized in closed form. This requires us to choose an iterative numerical optimization procedure, such as gradient descent.

The recipe for constructing a learning algorithm by combining models, costs, and optimization algorithms supports both supervised and unsupervised learning. The linear regression example shows how to support supervised learning. Unsupervised learning can be supported by defining a dataset that contains only $X$ and providing an appropriate unsupervised cost and model. For example, we can obtain the first PCA vector by specifying that our loss function is

$J(w) = \mathbb{E}_{x \sim \hat{p}_{\mathrm{data}}} \|x - r(x; w)\|_2^2$  (5.102)

while our model is defined to have $w$ with norm one and reconstruction function $r(x) = w^\top x\, w$.

In some cases, the cost function may be a function that we cannot actually evaluate, for computational reasons. In these cases, we can still approximately minimize it using iterative numerical optimization so long as we have some way of approximating its gradients.
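For concreteness, here is a minimal sketch (not from the book) of the recipe applied to the unsupervised PCA example above: the dataset is $X$, the cost is the reconstruction error of equation 5.102, the model is $r(x) = (w^\top x)\, w$ with $\|w\| = 1$, and the optimizer is projected gradient descent. The constraint handling (renormalizing after each step), learning rate, and step count are assumptions on my part, not the book's prescription; $X$ is assumed centered.

```python
import numpy as np

def first_pca_vector(X, lr=0.1, n_steps=500, seed=0):
    """Minimize the mean of ||x - (w^T x) w||^2 subject to ||w|| = 1."""
    rng = np.random.default_rng(seed)
    m = len(X)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_steps):
        # With ||w|| = 1 the cost equals E[x^T x] - E[(w^T x)^2], so its
        # gradient with respect to w is -(2/m) X^T X w.
        grad = -(2.0 / m) * (X.T @ (X @ w))
        w = w - lr * grad
        w /= np.linalg.norm(w)      # project back onto the unit sphere
    return w
```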
Most machine learning algorithms make use of this recipe, though it may not immediately be obvious. If a machine learning algorithm seems especially unique or hand-designed, it can usually be understood as using a special-case optimizer. Some models, such as decision trees or k-means, require special-case optimizers because their cost functions have flat regions that make them inappropriate for minimization by gradient-based optimizers. Recognizing that most machine learning algorithms can be described using this recipe helps to see the different algorithms as part of a taxonomy of methods for doing related tasks that work for similar reasons, rather than as a long list of algorithms that each have separate justifications.

5.11 Challenges Motivating Deep Learning

The simple machine learning algorithms described in this chapter work very well on a wide variety of important problems. However, they have not succeeded in solving the central problems in AI, such as recognizing speech or recognizing objects. The development of deep learning was motivated in part by the failure of traditional algorithms to generalize well on such AI tasks.

This section is about how the challenge of generalizing to new examples becomes exponentially more difficult when working with high-dimensional data, and how the mechanisms used to achieve generalization in traditional machine learning are insufficient to learn complicated functions in high-dimensional spaces. Such spaces also often impose high computational costs. Deep learning was designed to overcome these and other obstacles.

5.11.1 The Curse of Dimensionality

Many machine learning problems become exceedingly difficult when the number of dimensions in the data is high. This phenomenon is known as the curse of dimensionality. Of particular concern is that the number of possible distinct configurations of a set of variables increases exponentially as the number of variables increases.
Figure 5.9: As the number of relevant dimensions of the data increases (from left to right), the number of configurations of interest may grow exponentially. (Left) In this one-dimensional example, we have one variable for which we only care to distinguish 10 regions of interest. With enough examples falling within each of these regions (each region corresponds to a cell in the illustration), learning algorithms can easily generalize correctly. A straightforward way to generalize is to estimate the value of the target function within each region (and possibly interpolate between neighboring regions). (Center) With 2 dimensions it is more difficult to distinguish 10 different values of each variable. We need to keep track of up to 10×10 = 100 regions, and we need at least that many examples to cover all those regions. (Right) With 3 dimensions this grows to $10^3 = 1000$ regions and at least that many examples. For $d$ dimensions and $v$ values to be distinguished along each axis, we seem to need $O(v^d)$ regions and examples. This is an instance of the curse of dimensionality. Figure graciously provided by Nicolas Chapados.

The curse of dimensionality arises in many places in computer science, and especially so in machine learning.

One challenge posed by the curse of dimensionality is a statistical challenge. As illustrated in figure 5.9, a statistical challenge arises because the number of possible configurations of $x$ is much larger than the number of training examples. To understand the issue, let us consider that the input space is organized into a grid, like in the figure. We can describe low-dimensional space with a low number of grid cells that are mostly occupied by the data. When generalizing to a new data point, we can usually tell what to do simply by inspecting the training examples that lie in the same cell as the new input. For example, if estimating the probability density at some point $x$, we can just return the number of training examples in the same unit volume cell as $x$, divided by the total number of training examples. If we wish to classify an example, we can return the most common class of training examples in the same cell. If we are doing regression we can average the target values observed over the examples in that cell. But what about the cells for which we have seen no example? Because in high-dimensional spaces the number of configurations is huge, much larger than our number of examples, a typical grid cell has no training example associated with it. How could we possibly say something
meaningful about these new configurations? Many traditional machine learning algorithms simply assume that the output at a new point should be approximately the same as the output at the nearest training point.

5.11.2 Local Constancy and Smoothness Regularization

In order to generalize well, machine learning algorithms need to be guided by prior beliefs about what kind of function they should learn. Previously, we have seen these priors incorporated as explicit beliefs in the form of probability distributions over parameters of the model. More informally, we may also discuss prior beliefs as directly influencing the function itself and only indirectly acting on the parameters via their effect on the function. Additionally, we informally discuss prior beliefs as being expressed implicitly, by choosing algorithms that are biased toward choosing some class of functions over another, even though these biases may not be expressed (or even possible to express) in terms of a probability distribution representing our degree of belief in various functions.

Among the most widely used of these implicit “priors” is the smoothness prior, or local constancy prior. This prior states that the function we learn should not change very much within a small region.

Many simpler algorithms rely exclusively on this prior to generalize well, and as a result they fail to scale to the statistical challenges involved in solving AI-level tasks. Throughout this book, we will describe how deep learning introduces additional (explicit and implicit) priors in order to reduce the generalization error on sophisticated tasks. Here, we explain why the smoothness prior alone is insufficient for these tasks.

There are many different ways to implicitly or explicitly express a prior belief that the learned function should be smooth or locally constant. All of these different methods are designed to encourage the learning process to learn a function $f^*$ that satisfies the condition

$f^*(x) \approx f^*(x + \epsilon)$  (5.103)

for most configurations $x$ and small change $\epsilon$. In other words, if we know a good answer for an input $x$ (for example, if $x$ is a labeled training example) then that answer is probably good in the neighborhood of $x$. If we have several good answers in some neighborhood we would combine them (by some form of averaging or interpolation) to produce an answer that agrees with as many of them as much as possible.

An extreme example of the local constancy approach is the k-nearest neighbors family of learning algorithms. These predictors are literally constant over each
region containing all the points $x$ that have the same set of $k$ nearest neighbors in the training set. For $k = 1$, the number of distinguishable regions cannot be more than the number of training examples.

While the k-nearest neighbors algorithm copies the output from nearby training examples, most kernel machines interpolate between training set outputs associated with nearby training examples. An important class of kernels is the family of local kernels where $k(u, v)$ is large when $u = v$ and decreases as $u$ and $v$ grow farther apart from each other. A local kernel can be thought of as a similarity function that performs template matching, by measuring how closely a test example $x$ resembles each training example $x^{(i)}$. Much of the modern motivation for deep learning is derived from studying the limitations of local template matching and how deep models are able to succeed in cases where local template matching fails (Bengio et al., 2006b).

Decision trees also suffer from the limitations of exclusively smoothness-based learning because they break the input space into as many regions as there are leaves and use a separate parameter (or sometimes many parameters for extensions of decision trees) in each region. If the target function requires a tree with at least $n$ leaves to be represented accurately, then at least $n$ training examples are required to fit the tree. A multiple of $n$ is needed to achieve some level of statistical confidence in the predicted output.

In general, to distinguish $O(k)$ regions in input space, all of these methods require $O(k)$ examples. Typically there are $O(k)$ parameters, with $O(1)$ parameters associated with each of the $O(k)$ regions. The case of a nearest neighbor scenario, where each training example can be used to define at most one region, is illustrated in figure 5.10.

Is there a way to represent a complex function that has many more regions to be distinguished than the number of training examples? Clearly, assuming only smoothness of the underlying function will not allow a learner to do that. For example, imagine that the target function is a kind of checkerboard. A checkerboard contains many variations, but there is a simple structure to them. Imagine what happens when the number of training examples is substantially smaller than the number of black and white squares on the checkerboard. Based on only local generalization and the smoothness or local constancy prior, we would be guaranteed to correctly guess the color of a new point if it lies within the same checkerboard square as a training example. There is no guarantee that the learner could correctly extend the checkerboard pattern to points lying in squares that do not contain training examples. With this prior alone, the only information that an example tells us is the color of its square, and the only way to get the colors of the
Figure 5.10: Illustration of how the nearest neighbor algorithm breaks up the input space into regions. An example (represented here by a circle) within each region defines the region boundary (represented here by the lines). The $y$ value associated with each example defines what the output should be for all points within the corresponding region. The regions defined by nearest neighbor matching form a geometric pattern called a Voronoi diagram. The number of these contiguous regions cannot grow faster than the number of training examples. While this figure illustrates the behavior of the nearest neighbor algorithm specifically, other machine learning algorithms that rely exclusively on the local smoothness prior for generalization exhibit similar behaviors: each training example only informs the learner about how to generalize in some neighborhood immediately surrounding that example.
entire checkerboard right is to cover each of its cells with at least one example.

The smoothness assumption and the associated non-parametric learning algorithms work extremely well so long as there are enough examples for the learning algorithm to observe high points on most peaks and low points on most valleys of the true underlying function to be learned. This is generally true when the function to be learned is smooth enough and varies in few enough dimensions. In high dimensions, even a very smooth function can change smoothly but in a different way along each dimension. If the function additionally behaves differently in different regions, it can become extremely complicated to describe with a set of training examples. If the function is complicated (we want to distinguish a huge number of regions compared to the number of examples), is there any hope to generalize well?

The answer to both of these questions (whether it is possible to represent a complicated function efficiently, and whether it is possible for the estimated function to generalize well to new inputs) is yes. The key insight is that a very large number of regions, e.g., $O(2^k)$, can be defined with $O(k)$ examples, so long as we introduce some dependencies between the regions via additional assumptions about the underlying data generating distribution. In this way, we can actually generalize non-locally (Bengio and Monperrus, 2005; Bengio et al., 2006c). Many different deep learning algorithms provide implicit or explicit assumptions that are reasonable for a broad range of AI tasks in order to capture these advantages.

Other approaches to machine learning often make stronger, task-specific assumptions. For example, we could easily solve the checkerboard task by providing the assumption that the target function is periodic. Usually we do not include such strong, task-specific assumptions into neural networks so that they can generalize to a much wider variety of structures. AI tasks have structure that is much too complex to be limited to simple, manually specified properties such as periodicity, so we want learning algorithms that embody more general-purpose assumptions. The core idea in deep learning is that we assume that the data was generated by the composition of factors or features, potentially at multiple levels in a hierarchy. Many other similarly generic assumptions can further improve deep learning algorithms. These apparently mild assumptions allow an exponential gain in the relationship between the number of examples and the number of regions that can be distinguished. These exponential gains are described more precisely in sections 6.4.1, 15.4 and 15.5. The exponential advantages conferred by the use of deep, distributed representations counter the exponential challenges posed by the curse of dimensionality.
5.11.3 Manifold Learning

An important concept underlying many ideas in machine learning is that of a manifold.

A manifold is a connected region. Mathematically, it is a set of points, associated with a neighborhood around each point. From any given point, the manifold locally appears to be a Euclidean space. In everyday life, we experience the surface of the world as a 2-D plane, but it is in fact a spherical manifold in 3-D space.

The definition of a neighborhood surrounding each point implies the existence of transformations that can be applied to move on the manifold from one position to a neighboring one. In the example of the world’s surface as a manifold, one can walk north, south, east, or west.

Although there is a formal mathematical meaning to the term “manifold,” in machine learning it tends to be used more loosely to designate a connected set of points that can be approximated well by considering only a small number of degrees of freedom, or dimensions, embedded in a higher-dimensional space. Each dimension corresponds to a local direction of variation. See figure 5.11 for an example of training data lying near a one-dimensional manifold embedded in two-dimensional space. In the context of machine learning, we allow the dimensionality of the manifold to vary from one point to another. This often happens when a manifold intersects itself. For example, a figure eight is a manifold that has a single dimension in most places but two dimensions at the intersection at the center.

Figure 5.11: Data sampled from a distribution in a two-dimensional space that is actually concentrated near a one-dimensional manifold, like a twisted string. The solid line indicates the underlying manifold that the learner should infer.
Many machine learning problems seem hopeless if we expect the machine learning algorithm to learn functions with interesting variations across all of $\mathbb{R}^n$. Manifold learning algorithms surmount this obstacle by assuming that most of $\mathbb{R}^n$ consists of invalid inputs, and that interesting inputs occur only along a collection of manifolds containing a small subset of points, with interesting variations in the output of the learned function occurring only along directions that lie on the manifold, or with interesting variations happening only when we move from one manifold to another. Manifold learning was introduced in the case of continuous-valued data and the unsupervised learning setting, although this probability concentration idea can be generalized to both discrete data and the supervised learning setting: the key assumption remains that probability mass is highly concentrated.

The assumption that the data lies along a low-dimensional manifold may not always be correct or useful. We argue that in the context of AI tasks, such as those that involve processing images, sounds, or text, the manifold assumption is at least approximately correct. The evidence in favor of this assumption consists of two categories of observations.

The first observation in favor of the manifold hypothesis is that the probability distribution over images, text strings, and sounds that occur in real life is highly concentrated. Uniform noise essentially never resembles structured inputs from these domains. Figure 5.12 shows how, instead, uniformly sampled points look like the patterns of static that appear on analog television sets when no signal is available. Similarly, if you generate a document by picking letters uniformly at random, what is the probability that you will get a meaningful English-language text? Almost zero, again, because most of the long sequences of letters do not correspond to a natural language sequence: the distribution of natural language sequences occupies a very small volume in the total space of sequences of letters.
Figure 5.12: Sampling images uniformly at random (by randomly picking each pixel according to a uniform distribution) gives rise to noisy images. Although there is a non-zero probability to generate an image of a face or any other object frequently encountered in AI applications, we never actually observe this happening in practice. This suggests that the images encountered in AI applications occupy a negligible proportion of the volume of image space.

Of course, concentrated probability distributions are not sufficient to show that the data lies on a reasonably small number of manifolds. We must also establish that the examples we encounter are connected to each other by other
examples, with each example surrounded by other highly similar examples that may be reached by applying transformations to traverse the manifold.

The second argument in favor of the manifold hypothesis is that we can also imagine such neighborhoods and transformations, at least informally. In the case of images, we can certainly think of many possible transformations that allow us to trace out a manifold in image space: we can gradually dim or brighten the lights, gradually move or rotate objects in the image, gradually alter the colors on the surfaces of objects, etc. It remains likely that there are multiple manifolds involved in most applications. For example, the manifold of images of human faces may not be connected to the manifold of images of cat faces.

These thought experiments supporting the manifold hypotheses convey some intuitive reasons supporting it. More rigorous experiments (Cayton, 2005; Narayanan and Mitter, 2010; Schölkopf et al., 1998; Roweis and Saul, 2000; Tenenbaum et al., 2000; Brand, 2003; Belkin and Niyogi, 2003; Donoho and Grimes, 2003; Weinberger and Saul, 2004) clearly support the hypothesis for a large class of datasets of interest in AI.

When the data lies on a low-dimensional manifold, it can be most natural for machine learning algorithms to represent the data in terms of coordinates on the manifold, rather than in terms of coordinates in $\mathbb{R}^n$. In everyday life, we can think of roads as 1-D manifolds embedded in 3-D space. We give directions to specific addresses in terms of address numbers along these 1-D roads, not in terms of coordinates in 3-D space. Extracting these manifold coordinates is challenging, but holds the promise to improve many machine learning algorithms. This general principle is applied in many contexts. Figure 5.13 shows the manifold structure of a dataset consisting of faces. By the end of this book, we will have developed the methods necessary to learn such a manifold structure. In figure 20.6, we will see how a machine learning algorithm can successfully accomplish this goal.

This concludes part I, which has provided the basic concepts in mathematics and machine learning which are employed throughout the remaining parts of the book. You are now prepared to embark upon your study of deep learning.
Figure 5.13: Training examples from the QMUL Multiview Face Dataset (Gong et al., 2000) for which the subjects were asked to move in such a way as to cover the two-dimensional manifold corresponding to two angles of rotation. We would like learning algorithms to be able to discover and disentangle such manifold coordinates. Figure 20.6 illustrates such a feat.
Part II

Deep Networks: Modern Practices
This part of the book summarizes the state of modern deep learning as it is used to solve practical applications.

Deep learning has a long history and many aspirations. Several approaches have been proposed that have yet to entirely bear fruit. Several ambitious goals have yet to be realized. These less-developed branches of deep learning appear in the final part of the book. This part focuses only on those approaches that are essentially working technologies that are already used heavily in industry.

Modern deep learning provides a very powerful framework for supervised learning. By adding more layers and more units within a layer, a deep network can represent functions of increasing complexity. Most tasks that consist of mapping an input vector to an output vector, and that are easy for a person to do rapidly, can be accomplished via deep learning, given sufficiently large models and sufficiently large datasets of labeled training examples. Other tasks, that cannot be described as associating one vector to another, or that are difficult enough that a person would require time to think and reflect in order to accomplish the task, remain beyond the scope of deep learning for now.

This part of the book describes the core parametric function approximation technology that is behind nearly all modern practical applications of deep learning. We begin by describing the feedforward deep network model that is used to represent these functions. Next, we present advanced techniques for regularization and optimization of such models. Scaling these models to large inputs such as high resolution images or long temporal sequences requires specialization. We introduce the convolutional network for scaling to large images and the recurrent neural network for processing temporal sequences. Finally, we present general guidelines for the practical methodology involved in designing, building, and configuring an application involving deep learning, and review some of the applications of deep learning.

These chapters are the most important for a practitioner: someone who wants to begin implementing and using deep learning algorithms to solve real-world problems today.
Chapter 6

Deep Feedforward Networks

Deep feedforward networks, also often called feedforward neural networks, or multilayer perceptrons (MLPs), are the quintessential deep learning models. The goal of a feedforward network is to approximate some function $f^*$. For example, for a classifier, $y = f^*(x)$ maps an input $x$ to a category $y$. A feedforward network defines a mapping $y = f(x; \theta)$ and learns the value of the parameters $\theta$ that result in the best function approximation.

These models are called feedforward because information flows through the function being evaluated from $x$, through the intermediate computations used to define $f$, and finally to the output $y$. There are no feedback connections in which outputs of the model are fed back into itself. When feedforward neural networks are extended to include feedback connections, they are called recurrent neural networks, presented in chapter 10.

Feedforward networks are of extreme importance to machine learning practitioners. They form the basis of many important commercial applications. For example, the convolutional networks used for object recognition from photos are a specialized kind of feedforward network. Feedforward networks are a conceptual stepping stone on the path to recurrent networks, which power many natural language applications.

Feedforward neural networks are called networks because they are typically represented by composing together many different functions. The model is associated with a directed acyclic graph describing how the functions are composed together. For example, we might have three functions $f^{(1)}$, $f^{(2)}$, and $f^{(3)}$ connected in a chain, to form $f(x) = f^{(3)}(f^{(2)}(f^{(1)}(x)))$. These chain structures are the most commonly used structures of neural networks. In this case, $f^{(1)}$ is called the first layer of the network, $f^{(2)}$ is called the second layer, and so on.
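As a minimal sketch (not from the book) of this chain structure, each layer below is an affine transformation followed by a nonlinearity; the choice of tanh, the layer sizes, and the random weights are purely illustrative assumptions.

```python
import numpy as np

def layer(W, b):
    """Return a function computing tanh(W x + b)."""
    return lambda x: np.tanh(W @ x + b)

rng = np.random.default_rng(0)
f1 = layer(rng.standard_normal((4, 3)), np.zeros(4))   # first layer
f2 = layer(rng.standard_normal((4, 4)), np.zeros(4))   # second layer
f3 = layer(rng.standard_normal((1, 4)), np.zeros(1))   # output layer

x = rng.standard_normal(3)
y = f3(f2(f1(x)))   # evaluate the chain f(x) = f3(f2(f1(x)))
```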
  • 185. CHAPTER 6. DEEP FEEDFORWARD NETWORKS length of the chain gives the depth of the model. It is from this terminology that the name “deep learning” arises. The final layer of a feedforward network is called the output layer. During neural network training, we drive f(x) to match f∗ (x). The training data provides us with noisy, approximate examples of f∗ (x) evaluated at different training points. Each example x is accompanied by a label y f ≈ ∗ (x). The training examples specify directly what the output layer must do at each point x; it must produce a value that is close to y. The behavior of the other layers is not directly specified by the training data. The learning algorithm must decide how to use those layers to produce the desired output, but the training data does not say what each individual layer should do. Instead, the learning algorithm must decide how to use these layers to best implement an approximation of f∗. Because the training data does not show the desired output for each of these layers, these layers are called hidden layers. Finally, these networks are called neural because they are loosely inspired by neuroscience. Each hidden layer of the network is typically vector-valued. The dimensionality of these hidden layers determines the width of the model. Each element of the vector may be interpreted as playing a role analogous to a neuron. Rather than thinking of the layer as representing a single vector-to-vector function, we can also think of the layer as consisting of many units that act in parallel, each representing a vector-to-scalar function. Each unit resembles a neuron in the sense that it receives input from many other units and computes its own activation value. The idea of using many layers of vector-valued representation is drawn from neuroscience. The choice of the functions f( ) i (x) used to compute these representations is also loosely guided by neuroscientific observations about the functions that biological neurons compute. However, modern neural network research is guided by many mathematical and engineering disciplines, and the goal of neural networks is not to perfectly model the brain. It is best to think of feedforward networks as function approximation machines that are designed to achieve statistical generalization, occasionally drawing some insights from what we know about the brain, rather than as models of brain function. One way to understand feedforward networks is to begin with linear models and consider how to overcome their limitations. Linear models, such as logistic regression and linear regression, are appealing because they may be fit efficiently and reliably, either in closed form or with convex optimization. Linear models also have the obvious defect that the model capacity is limited to linear functions, so the model cannot understand the interaction between any two input variables. To extend linear models to represent nonlinear functions of x, we can apply the linear model not to x itself but to a transformed input φ(x), where φ is a 169
  • 186. CHAPTER 6. DEEP FEEDFORWARD NETWORKS nonlinear transformation. Equivalently, we can apply the kernel trick described in section , to obtain a nonlinear learning algorithm based on implicitly applying 5.7.2 the φ mapping. We can think of φ as providing a set of features describing x, or as providing a new representation for . x The question is then how to choose the mapping . φ 1. One option is to use a very generic φ, such as the infinite-dimensional φ that is implicitly used by kernel machines based on the RBF kernel. If φ(x) is of high enough dimension, we can always have enough capacity to fit the training set, but generalization to the test set often remains poor. Very generic feature mappings are usually based only on the principle of local smoothness and do not encode enough prior information to solve advanced problems. 2. Another option is to manually engineer φ. Until the advent of deep learning, this was the dominant approach. This approach requires decades of human effort for each separate task, with practitioners specializing in different domains such as speech recognition or computer vision, and with little transfer between domains. 3. The strategy of deep learning is to learn φ. In this approach, we have a model y = f(x;θ w , ) = φ(x; θ)w. We now have parameters θ that we use to learn φ from a broad class of functions, and parameters w that map from φ(x) to the desired output. This is an example of a deep feedforward network, with φ defining a hidden layer. This approach is the only one of the three that gives up on the convexity of the training problem, but the benefits outweigh the harms. In this approach, we parametrize the representation as φ(x; θ) and use the optimization algorithm to find the θ that corresponds to a good representation. If we wish, this approach can capture the benefit of the first approach by being highly generic—we do so by using a very broad family φ(x;θ). This approach can also capture the benefit of the second approach. Human practitioners can encode their knowledge to help generalization by designing families φ(x; θ) that they expect will perform well. The advantage is that the human designer only needs to find the right general function family rather than finding precisely the right function. This general principle of improving models by learning features extends beyond the feedforward networks described in this chapter. It is a recurring theme of deep learning that applies to all of the kinds of models described throughout this book. Feedforward networks are the application of this principle to learning deterministic 170
  • 187. CHAPTER 6. DEEP FEEDFORWARD NETWORKS mappings from x to y that lack feedback connections. Other models presented later will apply these principles to learning stochastic mappings, learning functions with feedback, and learning probability distributions over a single vector. We begin this chapter with a simple example of a feedforward network. Next, we address each of the design decisions needed to deploy a feedforward network. First, training a feedforward network requires making many of the same design decisions as are necessary for a linear model: choosing the optimizer, the cost function, and the form of the output units. We review these basics of gradient-based learning, then proceed to confront some of the design decisions that are unique to feedforward networks. Feedforward networks have introduced the concept of a hidden layer, and this requires us to choose the activation functions that will be used to compute the hidden layer values. We must also design the architecture of the network, including how many layers the network should contain, how these layers should be connected to each other, and how many units should be in each layer. Learning in deep neural networks requires computing the gradients of complicated functions. We present the back-propagation algorithm and its modern generalizations, which can be used to efficiently compute these gradients. Finally, we close with some historical perspective. 6.1 Example: Learning XOR To make the idea of a feedforward network more concrete, we begin with an example of a fully functioning feedforward network on a very simple task: learning the XOR function. The XOR function (“exclusive or”) is an operation on two binary values, x1 and x2. When exactly one of these binary values is equal to , the XOR function 1 returns . Otherwise, it returns 0. The XOR function provides the target function 1 y = f∗ (x) that we want to learn. Our model provides a function y = f(x;θ) and our learning algorithm will adapt the parameters θ to make f as similar as possible to f∗ . In this simple example, we will not be concerned with statistical generalization. We want our network to perform correctly on the four points X = {[0, 0], [0,1], [1,0], and [1,1]}. We will train the network on all four of these points. The only challenge is to fit the training set. We can treat this problem as a regression problem and use a mean squared error loss function. We choose this loss function to simplify the math for this example as much as possible. In practical applications, MSE is usually not an 171
appropriate cost function for modeling binary data. More appropriate approaches are described in section 6.2.2.2.

Evaluated on our whole training set, the MSE loss function is

J(\theta) = \frac{1}{4} \sum_{x \in \mathbb{X}} \left( f^*(x) - f(x; \theta) \right)^2 . \qquad (6.1)

Now we must choose the form of our model, f(x; θ). Suppose that we choose a linear model, with θ consisting of w and b. Our model is defined to be

f(x; w, b) = x^\top w + b . \qquad (6.2)

We can minimize J(θ) in closed form with respect to w and b using the normal equations.

After solving the normal equations, we obtain w = 0 and b = 1/2. The linear model simply outputs 0.5 everywhere. Why does this happen? Figure 6.1 shows how a linear model is not able to represent the XOR function. One way to solve this problem is to use a model that learns a different feature space in which a linear model is able to represent the solution.

Specifically, we will introduce a very simple feedforward network with one hidden layer containing two hidden units. See figure 6.2 for an illustration of this model. This feedforward network has a vector of hidden units h that are computed by a function f^{(1)}(x; W, c). The values of these hidden units are then used as the input for a second layer. The second layer is the output layer of the network. The output layer is still just a linear regression model, but now it is applied to h rather than to x. The network now contains two functions chained together: h = f^{(1)}(x; W, c) and y = f^{(2)}(h; w, b), with the complete model being f(x; W, c, w, b) = f^{(2)}(f^{(1)}(x)).

What function should f^{(1)} compute? Linear models have served us well so far, and it may be tempting to make f^{(1)} be linear as well. Unfortunately, if f^{(1)} were linear, then the feedforward network as a whole would remain a linear function of its input. Ignoring the intercept terms for the moment, suppose f^{(1)}(x) = W^\top x and f^{(2)}(h) = h^\top w. Then f(x) = w^\top W^\top x. We could represent this function as f(x) = x^\top w' where w' = W w.

Clearly, we must use a nonlinear function to describe the features. Most neural networks do so using an affine transformation controlled by learned parameters, followed by a fixed, nonlinear function called an activation function. We use that strategy here, by defining h = g(W^\top x + c), where W provides the weights of a linear transformation and c the biases.
[Figure 6.1: two panels, the original space x (axes x1 and x2) and the learned space h (axes h1 and h2).]

Figure 6.1: Solving the XOR problem by learning a representation. The bold numbers printed on the plot indicate the value that the learned function must output at each point. (Left) A linear model applied directly to the original input cannot implement the XOR function. When x1 = 0, the model's output must increase as x2 increases. When x1 = 1, the model's output must decrease as x2 increases. A linear model must apply a fixed coefficient w2 to x2. The linear model therefore cannot use the value of x1 to change the coefficient on x2 and cannot solve this problem. (Right) In the transformed space represented by the features extracted by a neural network, a linear model can now solve the problem. In our example solution, the two points that must have output 1 have been collapsed into a single point in feature space. In other words, the nonlinear features have mapped both x = [1, 0]^\top and x = [0, 1]^\top to a single point in feature space, h = [1, 0]^\top. The linear model can now describe the function as increasing in h1 and decreasing in h2. In this example, the motivation for learning the feature space is only to make the model capacity greater so that it can fit the training set. In more realistic applications, learned representations can also help the model to generalize.
[Figure 6.2: the same network drawn in two styles, one with a node per unit (x1, x2 → h1, h2 → y) and one with a node per layer (x → h → y, edges labeled W and w).]

Figure 6.2: An example of a feedforward network, drawn in two different styles. Specifically, this is the feedforward network we use to solve the XOR example. It has a single hidden layer containing two units. (Left) In this style, we draw every unit as a node in the graph. This style is very explicit and unambiguous, but for networks larger than this example it can consume too much space. (Right) In this style, we draw a node in the graph for each entire vector representing a layer's activations. This style is much more compact. Sometimes we annotate the edges in this graph with the name of the parameters that describe the relationship between two layers. Here, we indicate that a matrix W describes the mapping from x to h, and a vector w describes the mapping from h to y. We typically omit the intercept parameters associated with each layer when labeling this kind of drawing.

Previously, to describe a linear regression model, we used a vector of weights and a scalar bias parameter to describe an affine transformation from an input vector to an output scalar. Now, we describe an affine transformation from a vector x to a vector h, so an entire vector of bias parameters is needed. The activation function g is typically chosen to be a function that is applied element-wise, with h_i = g(x^\top W_{:,i} + c_i). In modern neural networks, the default recommendation is to use the rectified linear unit, or ReLU (Jarrett et al., 2009; Nair and Hinton, 2010; Glorot et al., 2011a), defined by the activation function g(z) = \max\{0, z\}, depicted in figure 6.3.

We can now specify our complete network as

f(x; W, c, w, b) = w^\top \max\{0, W^\top x + c\} + b . \qquad (6.3)

We can now specify a solution to the XOR problem. Let

W = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} , \qquad (6.4)

c = \begin{bmatrix} 0 \\ -1 \end{bmatrix} , \qquad (6.5)
[Figure 6.3: a plot of g(z) = max{0, z} against z.]

Figure 6.3: The rectified linear activation function. This activation function is the default activation function recommended for use with most feedforward neural networks. Applying this function to the output of a linear transformation yields a nonlinear transformation. However, the function remains very close to linear, in the sense that it is a piecewise linear function with two linear pieces. Because rectified linear units are nearly linear, they preserve many of the properties that make linear models easy to optimize with gradient-based methods. They also preserve many of the properties that make linear models generalize well. A common principle throughout computer science is that we can build complicated systems from minimal components. Much as a Turing machine's memory needs only to be able to store 0 or 1 states, we can build a universal function approximator from rectified linear functions.
w = \begin{bmatrix} 1 \\ -2 \end{bmatrix} , \qquad (6.6)

and b = 0.

We can now walk through the way that the model processes a batch of inputs. Let X be the design matrix containing all four points in the binary input space, with one example per row:

X = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{bmatrix} . \qquad (6.7)

The first step in the neural network is to multiply the input matrix by the first layer's weight matrix:

X W = \begin{bmatrix} 0 & 0 \\ 1 & 1 \\ 1 & 1 \\ 2 & 2 \end{bmatrix} . \qquad (6.8)

Next, we add the bias vector c, to obtain

\begin{bmatrix} 0 & -1 \\ 1 & 0 \\ 1 & 0 \\ 2 & 1 \end{bmatrix} . \qquad (6.9)

In this space, all of the examples lie along a line with slope 1. As we move along this line, the output needs to begin at 0, then rise to 1, then drop back down to 0. A linear model cannot implement such a function. To finish computing the value of h for each example, we apply the rectified linear transformation:

\begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 1 & 0 \\ 2 & 1 \end{bmatrix} . \qquad (6.10)

This transformation has changed the relationship between the examples. They no longer lie on a single line. As shown in figure 6.1, they now lie in a space where a linear model can solve the problem.

We finish by multiplying by the weight vector w:

\begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \end{bmatrix} . \qquad (6.11)
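The forward pass just described is easy to reproduce numerically. The following is a small illustrative sketch in NumPy (ours, not code from the book): it hard-codes the parameters from equations 6.4 to 6.6 and checks that the network recovers the hidden representation of equation 6.10 and the XOR outputs of equation 6.11.

```python
import numpy as np

# Hand-specified parameters from the text: W (eq. 6.4), c (eq. 6.5), w (eq. 6.6), b = 0.
W = np.array([[1., 1.],
              [1., 1.]])
c = np.array([0., -1.])
w = np.array([1., -2.])
b = 0.0

# Design matrix holding all four XOR inputs, one example per row (eq. 6.7).
X = np.array([[0., 0.],
              [0., 1.],
              [1., 0.],
              [1., 1.]])

def relu(z):
    return np.maximum(0.0, z)

h = relu(X @ W + c)     # hidden representation; rows [0,0], [1,0], [1,0], [2,1] as in eq. 6.10
y_hat = h @ w + b       # network output

print(h)
print(y_hat)            # [0. 1. 1. 0.], the XOR targets of eq. 6.11
```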
  • 193. CHAPTER 6. DEEP FEEDFORWARD NETWORKS The neural network has obtained the correct answer for every example in the batch. In this example, we simply specified the solution, then showed that it obtained zero error. In a real situation, there might be billions of model parameters and billions of training examples, so one cannot simply guess the solution as we did here. Instead, a gradient-based optimization algorithm can find parameters that produce very little error. The solution we described to the XOR problem is at a global minimum of the loss function, so gradient descent could converge to this point. There are other equivalent solutions to the XOR problem that gradient descent could also find. The convergence point of gradient descent depends on the initial values of the parameters. In practice, gradient descent would usually not find clean, easily understood, integer-valued solutions like the one we presented here. 6.2 Gradient-Based Learning Designing and training a neural network is not much different from training any other machine learning model with gradient descent. In section , we described 5.10 how to build a machine learning algorithm by specifying an optimization procedure, a cost function, and a model family. The largest difference between the linear models we have seen so far and neural networks is that the nonlinearity of a neural network causes most interesting loss functions to become non-convex. This means that neural networks are usually trained by using iterative, gradient-based optimizers that merely drive the cost function to a very low value, rather than the linear equation solvers used to train linear regression models or the convex optimization algorithms with global conver- gence guarantees used to train logistic regression or SVMs. Convex optimization converges starting from any initial parameters (in theory—in practice it is very robust but can encounter numerical problems). Stochastic gradient descent applied to non-convex loss functions has no such convergence guarantee, and is sensitive to the values of the initial parameters. For feedforward neural networks, it is important to initialize all weights to small random values. The biases may be initialized to zero or to small positive values. The iterative gradient-based opti- mization algorithms used to train feedforward networks and almost all other deep models will be described in detail in chapter , with parameter initialization in 8 particular discussed in section . For the moment, it suffices to understand that 8.4 the training algorithm is almost always based on using the gradient to descend the cost function in one way or another. The specific algorithms are improvements and refinements on the ideas of gradient descent, introduced in section , and, 4.3 177
  • 194. CHAPTER 6. DEEP FEEDFORWARD NETWORKS more specifically, are most often improvements of the stochastic gradient descent algorithm, introduced in section . 5.9 We can of course, train models such as linear regression and support vector machines with gradient descent too, and in fact this is common when the training set is extremely large. From this point of view, training a neural network is not much different from training any other model. Computing the gradient is slightly more complicated for a neural network, but can still be done efficiently and exactly. Section will describe how to obtain the gradient using the back-propagation 6.5 algorithm and modern generalizations of the back-propagation algorithm. As with other machine learning models, to apply gradient-based learning we must choose a cost function, and we must choose how to represent the output of the model. We now revisit these design considerations with special emphasis on the neural networks scenario. 6.2.1 Cost Functions An important aspect of the design of a deep neural network is the choice of the cost function. Fortunately, the cost functions for neural networks are more or less the same as those for other parametric models, such as linear models. In most cases, our parametric model defines a distribution p(y x | ;θ ) and we simply use the principle of maximum likelihood. This means we use the cross-entropy between the training data and the model’s predictions as the cost function. Sometimes, we take a simpler approach, where rather than predicting a complete probability distribution over y, we merely predict some statistic of y conditioned on . Specialized loss functions allow us to train a predictor of these estimates. x The total cost function used to train a neural network will often combine one of the primary cost functions described here with a regularization term. We have already seen some simple examples of regularization applied to linear models in section . The weight decay approach used for linear models is also directly 5.2.2 applicable to deep neural networks and is among the most popular regularization strategies. More advanced regularization strategies for neural networks will be described in chapter . 7 6.2.1.1 Learning Conditional Distributions with Maximum Likelihood Most modern neural networks are trained using maximum likelihood. This means that the cost function is simply the negative log-likelihood, equivalently described 178
as the cross-entropy between the training data and the model distribution. This cost function is given by

J(\theta) = -\mathbb{E}_{x, y \sim \hat{p}_{\text{data}}} \log p_{\text{model}}(y \mid x) . \qquad (6.12)

The specific form of the cost function changes from model to model, depending on the specific form of log p_model. The expansion of the above equation typically yields some terms that do not depend on the model parameters and may be discarded. For example, as we saw in section 5.5.1, if p_model(y | x) = N(y; f(x; θ), I), then we recover the mean squared error cost,

J(\theta) = \frac{1}{2} \mathbb{E}_{x, y \sim \hat{p}_{\text{data}}} \left\| y - f(x; \theta) \right\|^2 + \text{const} , \qquad (6.13)

up to a scaling factor of 1/2 and a term that does not depend on θ. The discarded constant is based on the variance of the Gaussian distribution, which in this case we chose not to parametrize. Previously, we saw that the equivalence between maximum likelihood estimation with an output distribution and minimization of mean squared error holds for a linear model, but in fact, the equivalence holds regardless of the f(x; θ) used to predict the mean of the Gaussian.

An advantage of this approach of deriving the cost function from maximum likelihood is that it removes the burden of designing cost functions for each model. Specifying a model p(y | x) automatically determines a cost function log p(y | x).

One recurring theme throughout neural network design is that the gradient of the cost function must be large and predictable enough to serve as a good guide for the learning algorithm. Functions that saturate (become very flat) undermine this objective because they make the gradient become very small. In many cases this happens because the activation functions used to produce the output of the hidden units or the output units saturate. The negative log-likelihood helps to avoid this problem for many models. Many output units involve an exp function that can saturate when its argument is very negative. The log function in the negative log-likelihood cost function undoes the exp of some output units. We will discuss the interaction between the cost function and the choice of output unit in section 6.2.2.

One unusual property of the cross-entropy cost used to perform maximum likelihood estimation is that it usually does not have a minimum value when applied to the models commonly used in practice. For discrete output variables, most models are parametrized in such a way that they cannot represent a probability of zero or one, but can come arbitrarily close to doing so. Logistic regression is an example of such a model. For real-valued output variables, if the model
  • 196. CHAPTER 6. DEEP FEEDFORWARD NETWORKS can control the density of the output distribution (for example, by learning the variance parameter of a Gaussian output distribution) then it becomes possible to assign extremely high density to the correct training set outputs, resulting in cross-entropy approaching negative infinity. Regularization techniques described in chapter provide several different ways of modifying the learning problem so 7 that the model cannot reap unlimited reward in this way. 6.2.1.2 Learning Conditional Statistics Instead of learning a full probability distribution p(y x | ; θ) we often want to learn just one conditional statistic of given . y x For example, we may have a predictor f(x;θ) that we wish to predict the mean of . y If we use a sufficiently powerful neural network, we can think of the neural network as being able to represent any function f from a wide class of functions, with this class being limited only by features such as continuity and boundedness rather than by having a specific parametric form. From this point of view, we can view the cost function as being a functional rather than just a function. A functional is a mapping from functions to real numbers. We can thus think of learning as choosing a function rather than merely choosing a set of parameters. We can design our cost functional to have its minimum occur at some specific function we desire. For example, we can design the cost functional to have its minimum lie on the function that maps x to the expected value of y given x. Solving an optimization problem with respect to a function requires a mathematical tool called calculus of variations, described in section . It is not necessary 19.4.2 to understand calculus of variations to understand the content of this chapter. At the moment, it is only necessary to understand that calculus of variations may be used to derive the following two results. Our first result derived using calculus of variations is that solving the optimiza- tion problem f∗ = arg min f Ex y , ∼pdata || − || y f( ) x 2 (6.14) yields f∗ ( ) = x Ey∼pdata( ) y x | [ ] y , (6.15) so long as this function lies within the class we optimize over. In other words, if we could train on infinitely many samples from the true data generating distribution, minimizing the mean squared error cost function gives a function that predicts the mean of for each value of . y x 180
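A small numerical illustration of this result (ours, not from the book): fixing x and drawing many samples of y, the constant prediction with the lowest average squared error is the sample mean, a finite-sample analogue of equations 6.14 and 6.15. The gamma distribution used here is an arbitrary, illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Samples of y for one fixed value of x; any skewed distribution makes the point clearly.
y = rng.gamma(shape=2.0, scale=3.0, size=10_000)

# Try a grid of constant predictions c and record each one's average squared error.
c = np.linspace(y.min(), y.max(), 401)
avg_sq_err = ((y[None, :] - c[:, None]) ** 2).mean(axis=1)

# The best constant prediction sits at (the grid point nearest to) the sample mean.
print(c[np.argmin(avg_sq_err)], y.mean())
```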
  • 197. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Different cost functions give different statistics. A second result derived using calculus of variations is that f∗ = arg min f Ex y , ∼pdata || − || y f( ) x 1 (6.16) yields a function that predicts the median value of y for each x, so long as such a function may be described by the family of functions we optimize over. This cost function is commonly called . mean absolute error Unfortunately, mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization. Some output units that saturate produce very small gradients when combined with these cost functions. This is one reason that the cross-entropy cost function is more popular than mean squared error or mean absolute error, even when it is not necessary to estimate an entire distribution . p( ) y x | 6.2.2 Output Units The choice of cost function is tightly coupled with the choice of output unit. Most of the time, we simply use the cross-entropy between the data distribution and the model distribution. The choice of how to represent the output then determines the form of the cross-entropy function. Any kind of neural network unit that may be used as an output can also be used as a hidden unit. Here, we focus on the use of these units as outputs of the model, but in principle they can be used internally as well. We revisit these units with additional detail about their use as hidden units in section . 6.3 Throughout this section, we suppose that the feedforward network provides a set of hidden features defined by h = f(x;θ). The role of the output layer is then to provide some additional transformation from the features to complete the task that the network must perform. 6.2.2.1 Linear Units for Gaussian Output Distributions One simple kind of output unit is an output unit based on an affine transformation with no nonlinearity. These are often just called linear units. Given features h, a layer of linear output units produces a vector ŷ = W h+b. Linear output layers are often used to produce the mean of a conditional Gaussian distribution: p( ) = ( ; y x | N y ˆ y I , ). (6.17) 181
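As a quick numerical check of this correspondence (our illustration, using SciPy's Gaussian density; none of this code comes from the book), the negative log-likelihood of y under N(ŷ, I) differs from half the squared error only by a term that does not depend on ŷ, so maximizing the likelihood and minimizing mean squared error select the same ŷ.

```python
import numpy as np
from scipy.stats import multivariate_normal

d = 3
rng = np.random.default_rng(1)
y = rng.normal(size=d)        # an observed target
y_hat = rng.normal(size=d)    # the mean produced by a linear output layer

# Negative log-likelihood of y under N(y_hat, I), evaluated by SciPy.
nll = -multivariate_normal.logpdf(y, mean=y_hat, cov=np.eye(d))

# Half the squared error plus a constant that does not depend on y_hat (eq. 6.13's discarded term).
half_sq_err = 0.5 * np.sum((y - y_hat) ** 2)
const = 0.5 * d * np.log(2.0 * np.pi)

print(np.isclose(nll, half_sq_err + const))   # True
```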
  • 198. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Maximizing the log-likelihood is then equivalent to minimizing the mean squared error. The maximum likelihood framework makes it straightforward to learn the covariance of the Gaussian too, or to make the covariance of the Gaussian be a function of the input. However, the covariance must be constrained to be a positive definite matrix for all inputs. It is difficult to satisfy such constraints with a linear output layer, so typically other output units are used to parametrize the covariance. Approaches to modeling the covariance are described shortly, in section . 6.2.2.4 Because linear units do not saturate, they pose little difficulty for gradient- based optimization algorithms and may be used with a wide variety of optimization algorithms. 6.2.2.2 Sigmoid Units for Bernoulli Output Distributions Many tasks require predicting the value of a binary variable y. Classification problems with two classes can be cast in this form. The maximum-likelihood approach is to define a Bernoulli distribution over y conditioned on . x A Bernoulli distribution is defined by just a single number. The neural net needs to predict only P(y = 1 | x). For this number to be a valid probability, it must lie in the interval [0, 1]. Satisfying this constraint requires some careful design effort. Suppose we were to use a linear unit, and threshold its value to obtain a valid probability: P y ( = 1 ) = max | x  0 min ,  1, w h + b  . (6.18) This would indeed define a valid conditional distribution, but we would not be able to train it very effectively with gradient descent. Any time that wh +b strayed outside the unit interval, the gradient of the output of the model with respect to its parameters would be 0. A gradient of 0 is typically problematic because the learning algorithm no longer has a guide for how to improve the corresponding parameters. Instead, it is better to use a different approach that ensures there is always a strong gradient whenever the model has the wrong answer. This approach is based on using sigmoid output units combined with maximum likelihood. A sigmoid output unit is defined by ŷ σ =  w h + b  (6.19) 182
where σ is the logistic sigmoid function described in section 3.10.

We can think of the sigmoid output unit as having two components. First, it uses a linear layer to compute z = w^\top h + b. Next, it uses the sigmoid activation function to convert z into a probability.

We omit the dependence on x for the moment to discuss how to define a probability distribution over y using the value z. The sigmoid can be motivated by constructing an unnormalized probability distribution P̃(y), which does not sum to 1. We can then divide by an appropriate constant to obtain a valid probability distribution. If we begin with the assumption that the unnormalized log probabilities are linear in y and z, we can exponentiate to obtain the unnormalized probabilities. We then normalize to see that this yields a Bernoulli distribution controlled by a sigmoidal transformation of z:

\log \tilde{P}(y) = yz , \qquad (6.20)
\tilde{P}(y) = \exp(yz) , \qquad (6.21)
P(y) = \frac{\exp(yz)}{\sum_{y'=0}^{1} \exp(y'z)} , \qquad (6.22)
P(y) = \sigma\left( (2y - 1) z \right) . \qquad (6.23)

Probability distributions based on exponentiation and normalization are common throughout the statistical modeling literature. The z variable defining such a distribution over binary variables is called a logit.

This approach to predicting the probabilities in log-space is natural to use with maximum likelihood learning. Because the cost function used with maximum likelihood is −log P(y | x), the log in the cost function undoes the exp of the sigmoid. Without this effect, the saturation of the sigmoid could prevent gradient-based learning from making good progress. The loss function for maximum likelihood learning of a Bernoulli parametrized by a sigmoid is

J(\theta) = -\log P(y \mid x) \qquad (6.24)
         = -\log \sigma\left( (2y - 1) z \right) \qquad (6.25)
         = \zeta\left( (1 - 2y) z \right) . \qquad (6.26)

This derivation makes use of some properties from section 3.10. By rewriting the loss in terms of the softplus function, we can see that it saturates only when (1 − 2y)z is very negative. Saturation thus occurs only when the model already has the right answer: when y = 1 and z is very positive, or y = 0 and z is very negative. When z has the wrong sign, the argument to the softplus function,
  • 200. CHAPTER 6. DEEP FEEDFORWARD NETWORKS (1 −2y)z, may be simplified to | | z . As | | z becomes large while z has the wrong sign, the softplus function asymptotes toward simply returning its argument | | z . The derivative with respect to z asymptotes to sign(z), so, in the limit of extremely incorrect z, the softplus function does not shrink the gradient at all. This property is very useful because it means that gradient-based learning can act to quickly correct a mistaken . z When we use other loss functions, such as mean squared error, the loss can saturate anytime σ(z) saturates. The sigmoid activation function saturates to 0 when z becomes very negative and saturates to when 1 z becomes very positive. The gradient can shrink too small to be useful for learning whenever this happens, whether the model has the correct answer or the incorrect answer. For this reason, maximum likelihood is almost always the preferred approach to training sigmoid output units. Analytically, the logarithm of the sigmoid is always defined and finite, because the sigmoid returns values restricted to the open interval (0, 1), rather than using the entire closed interval of valid probabilities [0,1]. In software implementations, to avoid numerical problems, it is best to write the negative log-likelihood as a function of z, rather than as a function of ŷ = σ(z ). If the sigmoid function underflows to zero, then taking the logarithm of ŷ yields negative infinity. 6.2.2.3 Softmax Units for Multinoulli Output Distributions Any time we wish to represent a probability distribution over a discrete variable with n possible values, we may use the softmax function. This can be seen as a generalization of the sigmoid function which was used to represent a probability distribution over a binary variable. Softmax functions are most often used as the output of a classifier, to represent the probability distribution over n different classes. More rarely, softmax functions can be used inside the model itself, if we wish the model to choose between one of n different options for some internal variable. In the case of binary variables, we wished to produce a single number ŷ P y . = ( = 1 ) | x (6.27) Because this number needed to lie between and , and because we wanted the 0 1 logarithm of the number to be well-behaved for gradient-based optimization of the log-likelihood, we chose to instead predict a number z = log P̃(y = 1 | x). Exponentiating and normalizing gave us a Bernoulli distribution controlled by the sigmoid function. 184
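Before moving on to the multi-class case, here is a brief sketch (ours, not from the book) of the implementation advice above: writing the loss as a function of the logit z via the softplus form of equation 6.26 keeps it finite and keeps its gradient informative, whereas a naive implementation in terms of ŷ = σ(z) produces infinities once the sigmoid saturates numerically.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softplus(z):
    # Numerically stable softplus: log(1 + exp(z)) = max(z, 0) + log1p(exp(-|z|)).
    return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))

def bce_naive(y, z):
    # -log P(y | x) written in terms of y_hat = sigmoid(z); breaks when the sigmoid saturates.
    y_hat = sigmoid(z)
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def bce_stable(y, z):
    # The softplus form of eq. 6.26: loss = softplus((1 - 2y) z).
    return softplus((1.0 - 2.0 * y) * z)

z = np.array([-800.0, 800.0])
y = np.array([1.0, 0.0])        # confidently wrong predictions in both cases

print(bce_naive(y, z))    # [inf inf], with overflow / divide-by-zero warnings
print(bce_stable(y, z))   # [800. 800.], finite and roughly |z|, so the gradient stays useful
```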
To generalize to the case of a discrete variable with n values, we now need to produce a vector ŷ, with ŷ_i = P(y = i | x). We require not only that each element ŷ_i be between 0 and 1, but also that the entire vector sums to 1 so that it represents a valid probability distribution. The same approach that worked for the Bernoulli distribution generalizes to the multinoulli distribution. First, a linear layer predicts unnormalized log probabilities:

z = W^\top h + b , \qquad (6.28)

where z_i = log P̃(y = i | x). The softmax function can then exponentiate and normalize z to obtain the desired ŷ. Formally, the softmax function is given by

\mathrm{softmax}(z)_i = \frac{\exp(z_i)}{\sum_j \exp(z_j)} . \qquad (6.29)

As with the logistic sigmoid, the use of the exp function works very well when training the softmax to output a target value y using maximum log-likelihood. In this case, we wish to maximize log P(y = i; z) = log softmax(z)_i. Defining the softmax in terms of exp is natural because the log in the log-likelihood can undo the exp of the softmax:

\log \mathrm{softmax}(z)_i = z_i - \log \sum_j \exp(z_j) . \qquad (6.30)

The first term of equation 6.30 shows that the input z_i always has a direct contribution to the cost function. Because this term cannot saturate, we know that learning can proceed, even if the contribution of z_i to the second term of equation 6.30 becomes very small. When maximizing the log-likelihood, the first term encourages z_i to be pushed up, while the second term encourages all of z to be pushed down. To gain some intuition for the second term, log Σ_j exp(z_j), observe that this term can be roughly approximated by max_j z_j. This approximation is based on the idea that exp(z_k) is insignificant for any z_k that is noticeably less than max_j z_j. The intuition we can gain from this approximation is that the negative log-likelihood cost function always strongly penalizes the most active incorrect prediction. If the correct answer already has the largest input to the softmax, then the −z_i term and the log Σ_j exp(z_j) ≈ max_j z_j = z_i terms will roughly cancel. This example will then contribute little to the overall training cost, which will be dominated by other examples that are not yet correctly classified.

So far we have discussed only a single example. Overall, unregularized maximum likelihood will drive the model to learn parameters that drive the softmax to predict
  • 202. CHAPTER 6. DEEP FEEDFORWARD NETWORKS the fraction of counts of each outcome observed in the training set: softmax( ( ; )) z x θ i ≈ m j=1 1y( ) j =i,x( ) j =x m j=1 1x( ) j =x . (6.31) Because maximum likelihood is a consistent estimator, this is guaranteed to happen so long as the model family is capable of representing the training distribution. In practice, limited model capacity and imperfect optimization will mean that the model is only able to approximate these fractions. Many objective functions other than the log-likelihood do not work as well with the softmax function. Specifically, objective functions that do not use a log to undo the exp of the softmax fail to learn when the argument to the exp becomes very negative, causing the gradient to vanish. In particular, squared error is a poor loss function for softmax units, and can fail to train the model to change its output, even when the model makes highly confident incorrect predictions ( , Bridle 1990). To understand why these other loss functions can fail, we need to examine the softmax function itself. Like the sigmoid, the softmax activation can saturate. The sigmoid function has a single output that saturates when its input is extremely negative or extremely positive. In the case of the softmax, there are multiple output values. These output values can saturate when the differences between input values become extreme. When the softmax saturates, many cost functions based on the softmax also saturate, unless they are able to invert the saturating activating function. To see that the softmax function responds to the difference between its inputs, observe that the softmax output is invariant to adding the same scalar to all of its inputs: softmax( ) = softmax( + ) z z c . (6.32) Using this property, we can derive a numerically stable variant of the softmax: softmax( ) = softmax( max z z − i zi). (6.33) The reformulated version allows us to evaluate softmax with only small numerical errors even when z contains extremely large or extremely negative numbers. Ex- amining the numerically stable variant, we see that the softmax function is driven by the amount that its arguments deviate from maxi zi. An output softmax(z)i saturates to when the corresponding input is maximal 1 (zi = maxi zi ) and zi is much greater than all of the other inputs. The output softmax(z)i can also saturate to when 0 zi is not maximal and the maximum is much greater. This is a generalization of the way that sigmoid units saturate, and 186
  • 203. CHAPTER 6. DEEP FEEDFORWARD NETWORKS can cause similar difficulties for learning if the loss function is not designed to compensate for it. The argument z to the softmax function can be produced in two different ways. The most common is simply to have an earlier layer of the neural network output every element of z, as described above using the linear layer z = W h +b. While straightforward, this approach actually overparametrizes the distribution. The constraint that the n outputs must sum to means that only 1 n − 1 parameters are necessary; the probability of the n-th value may be obtained by subtracting the first n− 1 1 probabilities from . We can thus impose a requirement that one element of z be fixed. For example, we can require that zn = 0. Indeed, this is exactly what the sigmoid unit does. Defining P (y = 1 | x) = σ(z) is equivalent to defining P(y = 1 | x) = softmax(z)1 with a two-dimensional z and z1 = 0. Both the n − 1 argument and the n argument approaches to the softmax can describe the same set of probability distributions, but have different learning dynamics. In practice, there is rarely much difference between using the overparametrized version or the restricted version, and it is simpler to implement the overparametrized version. From a neuroscientific point of view, it is interesting to think of the softmax as a way to create a form of competition between the units that participate in it: the softmax outputs always sum to 1 so an increase in the value of one unit necessarily corresponds to a decrease in the value of others. This is analogous to the lateral inhibition that is believed to exist between nearby neurons in the cortex. At the extreme (when the difference between the maximal ai and the others is large in magnitude) it becomes a form of winner-take-all (one of the outputs is nearly 1 and the others are nearly 0). The name “softmax” can be somewhat confusing. The function is more closely related to the arg max function than the max function. The term “soft” derives from the fact that the softmax function is continuous and differentiable. The arg max function, with its result represented as a one-hot vector, is not continuous or differentiable. The softmax function thus provides a “softened” version of the arg max. The corresponding soft version of the maximum function is softmax(z)z. It would perhaps be better to call the softmax function “softargmax,” but the current name is an entrenched convention. 6.2.2.4 Other Output Types The linear, sigmoid, and softmax output units described above are the most common. Neural networks can generalize to almost any kind of output layer that we wish. The principle of maximum likelihood provides a guide for how to design 187
  • 204. CHAPTER 6. DEEP FEEDFORWARD NETWORKS a good cost function for nearly any kind of output layer. In general, if we define a conditional distribution p(y x | ; θ), the principle of maximum likelihood suggests we use as our cost function. − | log ( p y x θ ; ) In general, we can think of the neural network as representing a function f(x;θ). The outputs of this function are not direct predictions of the value y. Instead, f(x;θ) = ω provides the parameters for a distribution over y. Our loss function can then be interpreted as . − log ( ; ( )) p y ω x For example, we may wish to learn the variance of a conditional Gaussian for y, given x. In the simple case, where the variance σ2 is a constant, there is a closed form expression because the maximum likelihood estimator of variance is simply the empirical mean of the squared difference between observationsy and their expected value. A computationally more expensive approach that does not require writing special-case code is to simply include the variance as one of the properties of the distribution p(y | x) that is controlled by ω = f(x; θ). The negative log-likelihood − log p(y;ω(x)) will then provide a cost function with the appropriate terms necessary to make our optimization procedure incrementally learn the variance. In the simple case where the standard deviation does not depend on the input, we can make a new parameter in the network that is copied directly into ω. This new parameter might be σ itself or could be a parameter v representing σ2 or it could be a parameter β representing 1 σ2 , depending on how we choose to parametrize the distribution. We may wish our model to predict a different amount of variance in y for different values of x. This is called a heteroscedastic model. In the heteroscedastic case, we simply make the specification of the variance be one of the values output by f(x;θ). A typical way to do this is to formulate the Gaussian distribution using precision, rather than variance, as described in equation . 3.22 In the multivariate case it is most common to use a diagonal precision matrix diag (6.34) ( ) β . This formulation works well with gradient descent because the formula for the log-likelihood of the Gaussian distribution parametrized by β involves only mul- tiplication by βi and addition of log βi. The gradient of multiplication, addition, and logarithm operations is well-behaved. By comparison, if we parametrized the output in terms of variance, we would need to use division. The division function becomes arbitrarily steep near zero. While large gradients can help learning, arbitrarily large gradients usually result in instability. If we parametrized the output in terms of standard deviation, the log-likelihood would still involve division, and would also involve squaring. The gradient through the squaring operation can vanish near zero, making it difficult to learn parameters that are squared. 188
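The following sketch illustrates this parametrization (the function and variable names are our own, illustrative choices, not the book's): the network is assumed to emit a mean μ and a raw activation a for each output dimension, the precision is made positive with the softplus β = ζ(a) (the positivity requirement is taken up just below), and the resulting negative log-likelihood involves only multiplication by β_i and addition of log β_i, as noted above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def softplus(a):
    return np.maximum(a, 0.0) + np.log1p(np.exp(-np.abs(a)))

def gaussian_nll_precision(y, mu, a):
    """Negative log-likelihood of y under N(mu, diag(beta)^-1), with beta = softplus(a).

    Both mu and a are assumed to be outputs of the network, so the model can predict
    a different precision (and hence variance) for every input: a heteroscedastic model.
    """
    beta = softplus(a)   # positive precision, one value per output dimension
    return 0.5 * np.sum(beta * (y - mu) ** 2 - np.log(beta) + np.log(2.0 * np.pi), axis=-1)

# Toy check against SciPy for a single example with made-up values.
y = np.array([0.5, -1.0])
mu = np.array([0.0, 0.0])
a = np.array([1.0, -2.0])
cov = np.diag(1.0 / softplus(a))   # covariance is the inverse of the diagonal precision
print(np.isclose(gaussian_nll_precision(y, mu, a),
                 -multivariate_normal.logpdf(y, mean=mu, cov=cov)))   # True
```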
  • 205. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Regardless of whether we use standard deviation, variance, or precision, we must ensure that the covariance matrix of the Gaussian is positive definite. Because the eigenvalues of the precision matrix are the reciprocals of the eigenvalues of the covariance matrix, this is equivalent to ensuring that the precision matrix is positive definite. If we use a diagonal matrix, or a scalar times the diagonal matrix, then the only condition we need to enforce on the output of the model is positivity. If we suppose that a is the raw activation of the model used to determine the diagonal precision, we can use the softplus function to obtain a positive precision vector: β = ζ(a). This same strategy applies equally if using variance or standard deviation rather than precision or if using a scalar times identity rather than diagonal matrix. It is rare to learn a covariance or precision matrix with richer structure than diagonal. If the covariance is full and conditional, then a parametrization must be chosen that guarantees positive-definiteness of the predicted covariance matrix. This can be achieved by writing Σ( ) = ( ) x B x B ( ) x , whereB is an unconstrained square matrix. One practical issue if the matrix is full rank is that computing the likelihood is expensive, with a d d × matrix requiring O(d3 ) computation for the determinant and inverse of Σ(x) (or equivalently, and more commonly done, its eigendecomposition or that of ). B x ( ) We often want to perform multimodal regression, that is, to predict real values that come from a conditional distribution p(y x | ) that can have several different peaks in y space for the same value of x. In this case, a Gaussian mixture is a natural representation for the output ( , ; , ). Jacobs et al. 1991 Bishop 1994 Neural networks with Gaussian mixtures as their output are often called mixture density networks. A Gaussian mixture output with n components is defined by the conditional probability distribution p( ) = y x | n  i=1 p i ( = c | N x) ( ; y µ( ) i ( ) x , Σ( ) i ( )) x . (6.35) The neural network must have three outputs: a vector defining p(c = i | x), a matrix providing µ( ) i (x) for all i, and a tensor providing Σ( ) i ( x) for all i. These outputs must satisfy different constraints: 1. Mixture components p(c = i | x): these form a multinoulli distribution over the n different components associated with latent variable1 c, and can 1 We consider c to be latent because we do not observe it in the data: given input x and target y, it is not possible to know with certainty which Gaussian component was responsible for y, but we can imagine that y was generated by picking one of them, and make that unobserved choice a random variable. 189
  • 206. CHAPTER 6. DEEP FEEDFORWARD NETWORKS typically be obtained by a softmax over an n-dimensional vector, to guarantee that these outputs are positive and sum to 1. 2. Means µ( ) i (x): these indicate the center or mean associated with the i-th Gaussian component, and are unconstrained (typically with no nonlinearity at all for these output units). If y is a d-vector, then the network must output an n d × matrix containing all n of these d-dimensional vectors. Learning these means with maximum likelihood is slightly more complicated than learning the means of a distribution with only one output mode. We only want to update the mean for the component that actually produced the observation. In practice, we do not know which component produced each observation. The expression for the negative log-likelihood naturally weights each example’s contribution to the loss for each component by the probability that the component produced the example. 3. Covariances Σ( ) i (x): these specify the covariance matrix for each component i. As when learning a single Gaussian component, we typically use a diagonal matrix to avoid needing to compute determinants. As with learning the means of the mixture, maximum likelihood is complicated by needing to assign partial responsibility for each point to each mixture component. Gradient descent will automatically follow the correct process if given the correct specification of the negative log-likelihood under the mixture model. It has been reported that gradient-based optimization of conditional Gaussian mixtures (on the output of neural networks) can be unreliable, in part because one gets divisions (by the variance) which can be numerically unstable (when some variance gets to be small for a particular example, yielding very large gradients). One solution is to clip gradients (see section ) while another is to scale 10.11.1 the gradients heuristically ( , ). Murray and Larochelle 2014 Gaussian mixture outputs are particularly effective in generative models of speech (Schuster 1999 , ) or movements of physical objects (Graves 2013 , ). The mixture density strategy gives a way for the network to represent multiple output modes and to control the variance of its output, which is crucial for obtaining a high degree of quality in these real-valued domains. An example of a mixture density network is shown in figure . 6.4 In general, we may wish to continue to model larger vectors y containing more variables, and to impose richer and richer structures on these output variables. For example, we may wish for our neural network to output a sequence of characters that forms a sentence. In these cases, we may continue to use the principle of maximum likelihood applied to our model p( y; ω(x)), but the model we use 190
  • 207. CHAPTER 6. DEEP FEEDFORWARD NETWORKS x y Figure 6.4: Samples drawn from a neural network with a mixture density output layer. The input x is sampled from a uniform distribution and the output y is sampled from pmodel(y x | ). The neural network is able to learn nonlinear mappings from the input to the parameters of the output distribution. These parameters include the probabilities governing which of three mixture components will generate the output as well as the parameters for each mixture component. Each mixture component is Gaussian with predicted mean and variance. All of these aspects of the output distribution are able to vary with respect to the input , and to do so in nonlinear ways. x to describe y becomes complex enough to be beyond the scope of this chapter. Chapter describes how to use recurrent neural networks to define such models 10 over sequences, and part describes advanced techniques for modeling arbitrary III probability distributions. 6.3 Hidden Units So far we have focused our discussion on design choices for neural networks that are common to most parametric machine learning models trained with gradient- based optimization. Now we turn to an issue that is unique to feedforward neural networks: how to choose the type of hidden unit to use in the hidden layers of the model. The design of hidden units is an extremely active area of research and does not yet have many definitive guiding theoretical principles. Rectified linear units are an excellent default choice of hidden unit. Many other types of hidden units are available. It can be difficult to determine when to use which kind (though rectified linear units are usually an acceptable choice). We 191
  • 208. CHAPTER 6. DEEP FEEDFORWARD NETWORKS describe here some of the basic intuitions motivating each type of hidden units. These intuitions can help decide when to try out each of these units. It is usually impossible to predict in advance which will work best. The design process consists of trial and error, intuiting that a kind of hidden unit may work well, and then training a network with that kind of hidden unit and evaluating its performance on a validation set. Some of the hidden units included in this list are not actually differentiable at all input points. For example, the rectified linear function g(z) = max{0, z} is not differentiable at z = 0. This may seem like it invalidates g for use with a gradient- based learning algorithm. In practice, gradient descent still performs well enough for these models to be used for machine learning tasks. This is in part because neural network training algorithms do not usually arrive at a local minimum of the cost function, but instead merely reduce its value significantly, as shown in figure . These ideas will be described further in chapter . Because we do not 4.3 8 expect training to actually reach a point where the gradient is 0, it is acceptable for the minima of the cost function to correspond to points with undefined gradient. Hidden units that are not differentiable are usually non-differentiable at only a small number of points. In general, a function g(z) has a left derivative defined by the slope of the function immediately to the left of z and a right derivative defined by the slope of the function immediately to the right of z. A function is differentiable at z only if both the left derivative and the right derivative are defined and equal to each other. The functions used in the context of neural networks usually have defined left derivatives and defined right derivatives. In the case of g(z) = max{0, z}, the left derivative at z = 0 0 is and the right derivative is . Software implementations of neural network training usually return one of 1 the one-sided derivatives rather than reporting that the derivative is undefined or raising an error. This may be heuristically justified by observing that gradient- based optimization on a digital computer is subject to numerical error anyway. When a function is asked to evaluate g(0), it is very unlikely that the underlying value truly was . Instead, it was likely to be some small value 0  that was rounded to . In some contexts, more theoretically pleasing justifications are available, but 0 these usually do not apply to neural network training. The important point is that in practice one can safely disregard the non-differentiability of the hidden unit activation functions described below. Unless indicated otherwise, most hidden units can be described as accepting a vector of inputs x, computing an affine transformation z = W x + b, and then applying an element-wise nonlinear function g(z). Most hidden units are distinguished from each other only by the choice of the form of the activation function . g( ) z 192
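This recipe can be stated in a few lines of NumPy (an illustrative sketch with made-up sizes, not code from the book): the layer accepts x, forms the affine transformation z = W^T x + b, and applies an element-wise g; for the rectifier, the undefined derivative at z = 0 is replaced by a one-sided derivative, following the software convention described above.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    # g(z) = max{0, z} is not differentiable at z = 0; following the usual software
    # convention we return a one-sided derivative there (here the left derivative, 0).
    return (z > 0).astype(z.dtype)

def hidden_layer(x, W, b, g=relu):
    """One generic hidden unit layer: an affine transformation followed by an element-wise g."""
    z = W.T @ x + b        # pre-activation z = W^T x + b
    return g(z)

# Tiny usage example with arbitrary sizes and random parameters.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(3, 4))    # maps a 3-dimensional input to 4 hidden units
b = np.full(4, 0.1)            # small positive bias, as suggested for ReLU layers in the next section
print(hidden_layer(x, W, b))
```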
  • 209. CHAPTER 6. DEEP FEEDFORWARD NETWORKS 6.3.1 Rectified Linear Units and Their Generalizations Rectified linear units use the activation function . g z , z ( ) = max 0 { } Rectified linear units are easy to optimize because they are so similar to linear units. The only difference between a linear unit and a rectified linear unit is that a rectified linear unit outputs zero across half its domain. This makes the derivatives through a rectified linear unit remain large whenever the unit is active. The gradients are not only large but also consistent. The second derivative of the rectifying operation is almost everywhere, and the derivative of the rectifying 0 operation is everywhere that the unit is active. This means that the gradient 1 direction is far more useful for learning than it would be with activation functions that introduce second-order effects. Rectified linear units are typically used on top of an affine transformation: h W = ( g  x b + ). (6.36) When initializing the parameters of the affine transformation, it can be a good practice to set all elements of b to a small, positive value, such as 0.1. This makes it very likely that the rectified linear units will be initially active for most inputs in the training set and allow the derivatives to pass through. Several generalizations of rectified linear units exist. Most of these general- izations perform comparably to rectified linear units and occasionally perform better. One drawback to rectified linear units is that they cannot learn via gradient- based methods on examples for which their activation is zero. A variety of generalizations of rectified linear units guarantee that they receive gradient every- where. Three generalizations of rectified linear units are based on using a non-zero slope αi when zi < 0: hi = g(z α , )i = max(0, zi) + αi min(0, zi ). Absolute value rectification fixes αi = −1 to obtain g(z) = | | z . It is used for object recognition from images ( , ), where it makes sense to seek features that are Jarrett et al. 2009 invariant under a polarity reversal of the input illumination. Other generalizations of rectified linear units are more broadly applicable. A leaky ReLU ( , Maas et al. 2013) fixes αi to a small value like 0.01 while a parametric ReLU or PReLU treats αi as a learnable parameter ( , ). He et al. 2015 Maxout units ( , ) generalize rectified linear units Goodfellow et al. 2013a further. Instead of applying an element-wise function g(z ), maxout units divide z into groups of k values. Each maxout unit then outputs the maximum element of 193
one of these groups:

g(z)i = maxj∈G(i) zj (6.37)

where G(i) is the set of indices into the inputs for group i, {(i − 1)k + 1, . . . , ik}. This provides a way of learning a piecewise linear function that responds to multiple directions in the input x space.

A maxout unit can learn a piecewise linear, convex function with up to k pieces. Maxout units can thus be seen as learning the activation function itself rather than just the relationship between units. With large enough k, a maxout unit can learn to approximate any convex function with arbitrary fidelity. In particular, a maxout layer with two pieces can learn to implement the same function of the input x as a traditional layer using the rectified linear activation function, absolute value rectification function, or the leaky or parametric ReLU, or can learn to implement a totally different function altogether. The maxout layer will of course be parametrized differently from any of these other layer types, so the learning dynamics will be different even in the cases where maxout learns to implement the same function of x as one of the other layer types.

Each maxout unit is now parametrized by k weight vectors instead of just one, so maxout units typically need more regularization than rectified linear units. They can work well without regularization if the training set is large and the number of pieces per unit is kept low (Cai et al., 2013).

Maxout units have a few other benefits. In some cases, one can gain some statistical and computational advantages by requiring fewer parameters. Specifically, if the features captured by n different linear filters can be summarized without losing information by taking the max over each group of k features, then the next layer can get by with k times fewer weights.

Because each unit is driven by multiple filters, maxout units have some redundancy that helps them to resist a phenomenon called catastrophic forgetting in which neural networks forget how to perform tasks that they were trained on in the past (Goodfellow et al., 2014a).

Rectified linear units and all of these generalizations of them are based on the principle that models are easier to optimize if their behavior is closer to linear. This same general principle of using linear behavior to obtain easier optimization also applies in other contexts besides deep linear networks. Recurrent networks can learn from sequences and produce a sequence of states and outputs. When training them, one needs to propagate information through several time steps, which is much easier when some linear computations (with some directional derivatives being of magnitude near 1) are involved. One of the best-performing recurrent network
architectures, the LSTM, propagates information through time via summation—a particularly straightforward kind of such linear activation. This is discussed further in section 10.10.

6.3.2 Logistic Sigmoid and Hyperbolic Tangent

Prior to the introduction of rectified linear units, most neural networks used the logistic sigmoid activation function

g(z) = σ(z) (6.38)

or the hyperbolic tangent activation function

g(z) = tanh(z). (6.39)

These activation functions are closely related because tanh(z) = 2σ(2z) − 1.

We have already seen sigmoid units as output units, used to predict the probability that a binary variable is 1. Unlike piecewise linear units, sigmoidal units saturate across most of their domain—they saturate to a high value when z is very positive, saturate to a low value when z is very negative, and are only strongly sensitive to their input when z is near 0. The widespread saturation of sigmoidal units can make gradient-based learning very difficult. For this reason, their use as hidden units in feedforward networks is now discouraged. Their use as output units is compatible with the use of gradient-based learning when an appropriate cost function can undo the saturation of the sigmoid in the output layer.

When a sigmoidal activation function must be used, the hyperbolic tangent activation function typically performs better than the logistic sigmoid. It resembles the identity function more closely, in the sense that tanh(0) = 0 while σ(0) = 1/2. Because tanh is similar to the identity function near 0, training a deep neural network ŷ = w⊤ tanh(U⊤ tanh(V⊤x)) resembles training a linear model ŷ = w⊤U⊤V⊤x so long as the activations of the network can be kept small. This makes training the tanh network easier.

Sigmoidal activation functions are more common in settings other than feedforward networks. Recurrent networks, many probabilistic models, and some autoencoders have additional requirements that rule out the use of piecewise linear activation functions and make sigmoidal units more appealing despite the drawbacks of saturation.
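For concreteness, a small NumPy sketch collecting the activation functions discussed in sections 6.3.1 and 6.3.2 is given below; the shapes and values are illustrative, and the last line numerically checks the identity tanh(z) = 2σ(2z) − 1:

    import numpy as np

    def relu(z):
        return np.maximum(0.0, z)

    def leaky_relu(z, alpha=0.01):
        # h_i = max(0, z_i) + alpha_i * min(0, z_i); alpha = -1 gives |z|
        # (absolute value rectification), a learnable alpha gives a PReLU.
        return np.maximum(0.0, z) + alpha * np.minimum(0.0, z)

    def maxout(z, k):
        # Divide z into groups of k values and output the max of each group.
        return z.reshape(-1, k).max(axis=1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = np.linspace(-2.0, 2.0, 6)
    print(relu(z), leaky_relu(z))
    print(maxout(z, k=3))                                    # two maxout outputs
    print(np.allclose(np.tanh(z), 2 * sigmoid(2 * z) - 1))   # True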
  • 212. CHAPTER 6. DEEP FEEDFORWARD NETWORKS 6.3.3 Other Hidden Units Many other types of hidden units are possible, but are used less frequently. In general, a wide variety of differentiable functions perform perfectly well. Many unpublished activation functions perform just as well as the popular ones. To provide a concrete example, the authors tested a feedforward network using h = cos(Wx + b) on the MNIST dataset and obtained an error rate of less than 1%, which is competitive with results obtained using more conventional activation functions. During research and development of new techniques, it is common to test many different activation functions and find that several variations on standard practice perform comparably. This means that usually new hidden unit types are published only if they are clearly demonstrated to provide a significant improvement. New hidden unit types that perform roughly comparably to known types are so common as to be uninteresting. It would be impractical to list all of the hidden unit types that have appeared in the literature. We highlight a few especially useful and distinctive ones. One possibility is to not have an activation g(z) at all. One can also think of this as using the identity function as the activation function. We have already seen that a linear unit can be useful as the output of a neural network. It may also be used as a hidden unit. If every layer of the neural network consists of only linear transformations, then the network as a whole will be linear. However, it is acceptable for some layers of the neural network to be purely linear. Consider a neural network layer with n inputs and p outputs, h = g(W x + b). We may replace this with two layers, with one layer using weight matrix U and the other using weight matrix V . If the first layer has no activation function, then we have essentially factored the weight matrix of the original layer based on W . The factored approach is to compute h = g(V  U x + b). If U produces q outputs, then U and V together contain only (n + p)q parameters, while W contains np parameters. For small q, this can be a considerable saving in parameters. It comes at the cost of constraining the linear transformation to be low-rank, but these low-rank relationships are often sufficient. Linear hidden units thus offer an effective way of reducing the number of parameters in a network. Softmax units are another kind of unit that is usually used as an output (as described in section ) but may sometimes be used as a hidden unit. Softmax 6.2.2.3 units naturally represent a probability distribution over a discrete variable with k possible values, so they may be used as a kind of switch. These kinds of hidden units are usually only used in more advanced architectures that explicitly learn to manipulate memory, described in section . 10.12 196
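As a rough illustration of the parameter savings from the factored linear layer described above, the following NumPy sketch uses illustrative sizes n = p = 1000 and q = 50; the variable names mirror the text but the specific numbers are arbitrary:

    import numpy as np

    n, p, q = 1000, 1000, 50          # illustrative layer sizes
    rng = np.random.default_rng(0)

    x = rng.standard_normal(n)
    U = rng.standard_normal((n, q))   # first layer, purely linear (no activation)
    V = rng.standard_normal((q, p))   # second layer, followed by the nonlinearity
    b = np.zeros(p)

    h = np.maximum(0.0, V.T @ (U.T @ x) + b)   # h = g(V^T U^T x + b), g = relu

    full_params = n * p               # a dense W would need np parameters
    factored_params = (n + p) * q     # U and V together need (n + p)q parameters
    print(h.shape, full_params, factored_params)   # (1000,) 1000000 100000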
A few other reasonably common hidden unit types include:

• Radial basis function or RBF unit: hi = exp(−(1/σi²) ||W:,i − x||²). This function becomes more active as x approaches a template W:,i. Because it saturates to 0 for most x, it can be difficult to optimize.

• Softplus: g(a) = ζ(a) = log(1 + e^a). This is a smooth version of the rectifier, introduced by Dugas et al. (2001) for function approximation and by Nair and Hinton (2010) for the conditional distributions of undirected probabilistic models. Glorot et al. (2011a) compared the softplus and rectifier and found better results with the latter. The use of the softplus is generally discouraged. The softplus demonstrates that the performance of hidden unit types can be very counterintuitive—one might expect it to have an advantage over the rectifier due to being differentiable everywhere or due to saturating less completely, but empirically it does not.

• Hard tanh: this is shaped similarly to the tanh and the rectifier but unlike the latter, it is bounded, g(a) = max(−1, min(1, a)). It was introduced by Collobert (2004).

Hidden unit design remains an active area of research and many useful hidden unit types remain to be discovered.

6.4 Architecture Design

Another key design consideration for neural networks is determining the architecture. The word architecture refers to the overall structure of the network: how many units it should have and how these units should be connected to each other.

Most neural networks are organized into groups of units called layers. Most neural network architectures arrange these layers in a chain structure, with each layer being a function of the layer that preceded it. In this structure, the first layer is given by

h(1) = g(1)(W(1)⊤ x + b(1)), (6.40)

the second layer is given by

h(2) = g(2)(W(2)⊤ h(1) + b(2)), (6.41)

and so on.
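A minimal NumPy sketch of this chain structure appears below; the layer widths, the choice of tanh for the hidden activations, and the linear output unit are all illustrative assumptions:

    import numpy as np

    def layer(W, b, h_prev, g):
        # One link of the chain: h(k) = g(k)(W(k)^T h(k-1) + b(k)).
        return g(W.T @ h_prev + b)

    rng = np.random.default_rng(0)
    sizes = [4, 8, 8, 1]                         # input, two hidden layers, output
    Ws = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(n) for n in sizes[1:]]

    h = rng.standard_normal(sizes[0])            # the input x plays the role of h(0)
    for k, (W, b) in enumerate(zip(Ws, bs)):
        g = np.tanh if k < len(Ws) - 1 else (lambda a: a)   # linear output unit
        h = layer(W, b, h, g)
    print(h)                                     # the network's output ŷ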
  • 214. CHAPTER 6. DEEP FEEDFORWARD NETWORKS In these chain-based architectures, the main architectural considerations are to choose the depth of the network and the width of each layer. As we will see, a network with even one hidden layer is sufficient to fit the training set. Deeper networks often are able to use far fewer units per layer and far fewer parameters and often generalize to the test set, but are also often harder to optimize. The ideal network architecture for a task must be found via experimentation guided by monitoring the validation set error. 6.4.1 Universal Approximation Properties and Depth A linear model, mapping from features to outputs via matrix multiplication, can by definition represent only linear functions. It has the advantage of being easy to train because many loss functions result in convex optimization problems when applied to linear models. Unfortunately, we often want to learn nonlinear functions. At first glance, we might presume that learning a nonlinear function requires designing a specialized model family for the kind of nonlinearity we want to learn. Fortunately, feedforward networks with hidden layers provide a universal approxi- mation framework. Specifically, the universal approximation theorem (Hornik et al., ; , ) states that a feedforward network with a linear output 1989 Cybenko 1989 layer and at least one hidden layer with any “squashing” activation function (such as the logistic sigmoid activation function) can approximate any Borel measurable function from one finite-dimensional space to another with any desired non-zero amount of error, provided that the network is given enough hidden units. The derivatives of the feedforward network can also approximate the derivatives of the function arbitrarily well ( , ). The concept of Borel measurability Hornik et al. 1990 is beyond the scope of this book; for our purposes it suffices to say that any continuous function on a closed and bounded subset of Rn is Borel measurable and therefore may be approximated by a neural network. A neural network may also approximate any function mapping from any finite dimensional discrete space to another. While the original theorems were first stated in terms of units with activation functions that saturate both for very negative and for very positive arguments, universal approximation theorems have also been proved for a wider class of activation functions, which includes the now commonly used rectified linear unit ( , ). Leshno et al. 1993 The universal approximation theorem means that regardless of what function we are trying to learn, we know that a large MLP will be able to represent this function. However, we are not guaranteed that the training algorithm will be able to learn that function. Even if the MLP is able to represent the function, learning can fail for two different reasons. First, the optimization algorithm used for training 198
  • 215. CHAPTER 6. DEEP FEEDFORWARD NETWORKS may not be able to find the value of the parameters that corresponds to the desired function. Second, the training algorithm might choose the wrong function due to overfitting. Recall from section that the “no free lunch” theorem shows that 5.2.1 there is no universally superior machine learning algorithm. Feedforward networks provide a universal system for representing functions, in the sense that, given a function, there exists a feedforward network that approximates the function. There is no universal procedure for examining a training set of specific examples and choosing a function that will generalize to points not in the training set. The universal approximation theorem says that there exists a network large enough to achieve any degree of accuracy we desire, but the theorem does not say how large this network will be. ( ) provides some bounds on the Barron 1993 size of a single-layer network needed to approximate a broad class of functions. Unfortunately, in the worse case, an exponential number of hidden units (possibly with one hidden unit corresponding to each input configuration that needs to be distinguished) may be required. This is easiest to see in the binary case: the number of possible binary functions on vectors v ∈ {0, 1}n is 22n and selecting one such function requires 2n bits, which will in general require O(2n) degrees of freedom. In summary, a feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly. In many circumstances, using deeper models can reduce the number of units required to represent the desired function and can reduce the amount of generalization error. There exist families of functions which can be approximated efficiently by an architecture with depth greater than some valued, but which require a much larger model if depth is restricted to be less than or equal to d. In many cases, the number of hidden units required by the shallow model is exponential in n. Such results were first proved for models that do not resemble the continuous, differentiable neural networks used for machine learning, but have since been extended to these models. The first results were for circuits of logic gates ( , ). Later Håstad 1986 work extended these results to linear threshold units with non-negative weights ( , ; , ), and then to networks with Håstad and Goldmann 1991 Hajnal et al. 1993 continuous-valued activations ( , ; , ). Many modern Maass 1992 Maass et al. 1994 neural networks use rectified linear units. ( ) demonstrated Leshno et al. 1993 that shallow networks with a broad family of non-polynomial activation functions, including rectified linear units, have universal approximation properties, but these results do not address the questions of depth or efficiency—they specify only that a sufficiently wide rectifier network could represent any function. Montufar et al. 199
(2014) showed that functions representable with a deep rectifier net can require an exponential number of hidden units with a shallow (one hidden layer) network. More precisely, they showed that piecewise linear networks (which can be obtained from rectifier nonlinearities or maxout units) can represent functions with a number of regions that is exponential in the depth of the network. Figure 6.5 illustrates how a network with absolute value rectification creates mirror images of the function computed on top of some hidden unit, with respect to the input of that hidden unit. Each hidden unit specifies where to fold the input space in order to create mirror responses (on both sides of the absolute value nonlinearity). By composing these folding operations, we obtain an exponentially large number of piecewise linear regions which can capture all kinds of regular (e.g., repeating) patterns.

Figure 6.5: An intuitive, geometric explanation of the exponential advantage of deeper rectifier networks, shown formally by Montufar et al. (2014). (Left) An absolute value rectification unit has the same output for every pair of mirror points in its input. The mirror axis of symmetry is given by the hyperplane defined by the weights and bias of the unit. A function computed on top of that unit (the green decision surface) will be a mirror image of a simpler pattern across that axis of symmetry. (Center) The function can be obtained by folding the space around the axis of symmetry. (Right) Another repeating pattern can be folded on top of the first (by another downstream unit) to obtain another symmetry (which is now repeated four times, with two hidden layers). Figure reproduced with permission from Montufar et al. (2014).

More precisely, the main theorem in Montufar et al. (2014) states that the number of linear regions carved out by a deep rectifier network with d inputs, depth l, and n units per hidden layer, is

O( (n choose d)^{d(l−1)} n^d ), (6.42)

i.e., exponential in the depth l. In the case of maxout networks with k filters per unit, the number of linear regions is

O( k^{(l−1)+d} ). (6.43)
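As a rough illustration only, the following snippet evaluates the expressions inside the O(·) notation of equations 6.42 and 6.43 for a few hypothetical settings; since constant factors are ignored, these numbers are not exact region counts, but they show the exponential growth with depth l:

    from math import comb

    n, d, k = 20, 4, 5                     # hypothetical width, input size, filters
    for l in (2, 3, 4):
        rectifier_bound = comb(n, d) ** (d * (l - 1)) * n ** d
        maxout_bound = k ** ((l - 1) + d)
        print(l, rectifier_bound, maxout_bound)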
  • 217. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Of course, there is no guarantee that the kinds of functions we want to learn in applications of machine learning (and in particular for AI) share such a property. We may also want to choose a deep model for statistical reasons. Any time we choose a specific machine learning algorithm, we are implicitly stating some set of prior beliefs we have about what kind of function the algorithm should learn. Choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions. This can be interpreted from a representation learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation. Alternately, we can interpret the use of a deep architecture as expressing a belief that the function we want to learn is a computer program consisting of multiple steps, where each step makes use of the previous step’s output. These intermediate outputs are not necessarily factors of variation, but can instead be analogous to counters or pointers that the network uses to organize its internal processing. Empirically, greater depth does seem to result in better generalization for a wide variety of tasks ( , ; , ; , ; Bengio et al. 2007 Erhan et al. 2009 Bengio 2009 Mesnil 2011 Ciresan 2012 Krizhevsky 2012 Sermanet et al., ; et al., ; et al., ; et al., 2013 Farabet 2013 Couprie 2013 Kahou 2013 Goodfellow ; et al., ; et al., ; et al., ; et al. et al. , ; 2014d Szegedy , ). See figure and figure for examples of 2014a 6.6 6.7 some of these empirical results. This suggests that using deep architectures does indeed express a useful prior over the space of functions the model learns. 6.4.2 Other Architectural Considerations So far we have described neural networks as being simple chains of layers, with the main considerations being the depth of the network and the width of each layer. In practice, neural networks show considerably more diversity. Many neural network architectures have been developed for specific tasks. Specialized architectures for computer vision called convolutional networks are described in chapter . Feedforward networks may also be generalized to the 9 recurrent neural networks for sequence processing, described in chapter , which 10 have their own architectural considerations. In general, the layers need not be connected in a chain, even though this is the most common practice. Many architectures build a main chain but then add extra architectural features to it, such as skip connections going from layer i to layer i+ 2 or higher. These skip connections make it easier for the gradient to flow from output layers to layers nearer the input. 201
Figure 6.6: Empirical results showing that deeper networks generalize better when used to transcribe multi-digit numbers from photographs of addresses. Data from Goodfellow et al. (2014d). The plot shows test accuracy (percent, roughly 92 to 96.5) as a function of the number of layers (3 to 11); the test set accuracy consistently increases with increasing depth. See figure 6.7 for a control experiment demonstrating that other increases to the model size do not yield the same effect.

Another key consideration of architecture design is exactly how to connect a pair of layers to each other. In the default neural network layer described by a linear transformation via a matrix W, every input unit is connected to every output unit. Many specialized networks in the chapters ahead have fewer connections, so that each unit in the input layer is connected to only a small subset of units in the output layer. These strategies for reducing the number of connections reduce the number of parameters and the amount of computation required to evaluate the network, but are often highly problem-dependent. For example, convolutional networks, described in chapter 9, use specialized patterns of sparse connections that are very effective for computer vision problems. In this chapter, it is difficult to give much more specific advice concerning the architecture of a generic neural network. Subsequent chapters develop the particular architectural strategies that have been found to work well for different application domains.
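Returning to the idea of sparser connectivity described above, the following small NumPy sketch connects each output unit to only a subset of the input units via a fixed binary mask; the mask pattern and sizes are illustrative, not a recipe from the text:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 8, 4
    W = rng.standard_normal((n_in, n_out)) * 0.1
    mask = np.zeros((n_in, n_out))
    for j in range(n_out):
        mask[2 * j: 2 * j + 2, j] = 1.0        # each output sees only two inputs

    x = rng.standard_normal(n_in)
    h = np.maximum(0.0, x @ (W * mask))        # fully connected would use x @ W
    print(h.shape, int(mask.sum()), W.size)    # 8 connections instead of 32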
Figure 6.7: Deeper models tend to perform better. This is not merely because the model is larger. The plot shows test accuracy (percent, roughly 91 to 97) as a function of the number of parameters (up to about 10^8), with one curve per model: 3 layers with varied convolutional layers, 3 layers with varied fully connected layers, and 11 layers with varied convolutional layers. This experiment from Goodfellow et al. (2014d) shows that increasing the number of parameters in layers of convolutional networks without increasing their depth is not nearly as effective at increasing test set performance. The legend indicates the depth of network used to make each curve and whether the curve represents variation in the size of the convolutional or the fully connected layers. We observe that shallow models in this context overfit at around 20 million parameters while deep ones can benefit from having over 60 million. This suggests that using a deep model expresses a useful preference over the space of functions the model can learn. Specifically, it expresses a belief that the function should consist of many simpler functions composed together. This could result either in learning a representation that is composed in turn of simpler representations (e.g., corners defined in terms of edges) or in learning a program with sequentially dependent steps (e.g., first locate a set of objects, then segment them from each other, then recognize them).
  • 220. CHAPTER 6. DEEP FEEDFORWARD NETWORKS 6.5 Back-Propagation and Other Differentiation Algo- rithms When we use a feedforward neural network to accept an input x and produce an output ŷ, information flows forward through the network. The inputs x provide the initial information that then propagates up to the hidden units at each layer and finally produces ŷ . This is called forward propagation. During training, forward propagation can continue onward until it produces a scalar cost J(θ). The back-propagation algorithm ( , ), often simply called Rumelhart et al. 1986a backprop, allows the information from the cost to then flow backwards through the network, in order to compute the gradient. Computing an analytical expression for the gradient is straightforward, but numerically evaluating such an expression can be computationally expensive. The back-propagation algorithm does so using a simple and inexpensive procedure. The term back-propagation is often misunderstood as meaning the whole learning algorithm for multi-layer neural networks. Actually, back-propagation refers only to the method for computing the gradient, while another algorithm, such as stochastic gradient descent, is used to perform learning using this gradient. Furthermore, back-propagation is often misunderstood as being specific to multi- layer neural networks, but in principle it can compute derivatives of any function (for some functions, the correct response is to report that the derivative of the function is undefined). Specifically, we will describe how to compute the gradient ∇xf(x y , ) for an arbitrary function f , wherex is a set of variables whose derivatives are desired, and y is an additional set of variables that are inputs to the function but whose derivatives are not required. In learning algorithms, the gradient we most often require is the gradient of the cost function with respect to the parameters, ∇θ J(θ). Many machine learning tasks involve computing other derivatives, either as part of the learning process, or to analyze the learned model. The back- propagation algorithm can be applied to these tasks as well, and is not restricted to computing the gradient of the cost function with respect to the parameters. The idea of computing derivatives by propagating information through a network is very general, and can be used to compute values such as the Jacobian of a function f with multiple outputs. We restrict our description here to the most commonly used case where has a single output. f 204
  • 221. CHAPTER 6. DEEP FEEDFORWARD NETWORKS 6.5.1 Computational Graphs So far we have discussed neural networks with a relatively informal graph language. To describe the back-propagation algorithm more precisely, it is helpful to have a more precise language. computational graph Many ways of formalizing computation as graphs are possible. Here, we use each node in the graph to indicate a variable. The variable may be a scalar, vector, matrix, tensor, or even a variable of another type. To formalize our graphs, we also need to introduce the idea of an operation. An operation is a simple function of one or more variables. Our graph language is accompanied by a set of allowable operations. Functions more complicated than the operations in this set may be described by composing many operations together. Without loss of generality, we define an operation to return only a single output variable. This does not lose generality because the output variable can have multiple entries, such as a vector. Software implementations of back-propagation usually support operations with multiple outputs, but we avoid this case in our description because it introduces many extra details that are not important to conceptual understanding. If a variable y is computed by applying an operation to a variable x, then we draw a directed edge from x to y. We sometimes annotate the output node with the name of the operation applied, and other times omit this label when the operation is clear from context. Examples of computational graphs are shown in figure . 6.8 6.5.2 Chain Rule of Calculus The chain rule of calculus (not to be confused with the chain rule of probability) is used to compute the derivatives of functions formed by composing other functions whose derivatives are known. Back-propagation is an algorithm that computes the chain rule, with a specific order of operations that is highly efficient. Let x be a real number, and let f and g both be functions mapping from a real number to a real number. Suppose that y = g(x) and z = f(g(x)) = f(y). Then the chain rule states that dz dx = dz dy dy dx . (6.44) We can generalize this beyond the scalar case. Suppose that x ∈ Rm, y ∈ Rn, 205
Figure 6.8: Examples of computational graphs. (a) The graph using the × operation to compute z = xy. (b) The graph for the logistic regression prediction ŷ = σ(x⊤w + b). Some of the intermediate expressions do not have names in the algebraic expression but need names in the graph. We simply name the i-th such variable u(i). (c) The computational graph for the expression H = max{0, XW + b}, which computes a design matrix of rectified linear unit activations H given a design matrix containing a minibatch of inputs X. (d) Examples a–c applied at most one operation to each variable, but it is possible to apply more than one operation. Here we show a computation graph that applies more than one operation to the weights w of a linear regression model. The weights are used to make both the prediction ŷ and the weight decay penalty λ Σi wi².
g maps from R^m to R^n, and f maps from R^n to R. If y = g(x) and z = f(y), then

∂z/∂xi = Σj (∂z/∂yj)(∂yj/∂xi). (6.45)

In vector notation, this may be equivalently written as

∇x z = (∂y/∂x)⊤ ∇y z, (6.46)

where ∂y/∂x is the n × m Jacobian matrix of g.

From this we see that the gradient of a variable x can be obtained by multiplying a Jacobian matrix ∂y/∂x by a gradient ∇y z. The back-propagation algorithm consists of performing such a Jacobian-gradient product for each operation in the graph.

Usually we do not apply the back-propagation algorithm merely to vectors, but rather to tensors of arbitrary dimensionality. Conceptually, this is exactly the same as back-propagation with vectors. The only difference is how the numbers are arranged in a grid to form a tensor. We could imagine flattening each tensor into a vector before we run back-propagation, computing a vector-valued gradient, and then reshaping the gradient back into a tensor. In this rearranged view, back-propagation is still just multiplying Jacobians by gradients.

To denote the gradient of a value z with respect to a tensor X, we write ∇X z, just as if X were a vector. The indices into X now have multiple coordinates—for example, a 3-D tensor is indexed by three coordinates. We can abstract this away by using a single variable i to represent the complete tuple of indices. For all possible index tuples i, (∇X z)i gives ∂z/∂Xi. This is exactly the same as how for all possible integer indices i into a vector, (∇x z)i gives ∂z/∂xi. Using this notation, we can write the chain rule as it applies to tensors. If Y = g(X) and z = f(Y), then

∇X z = Σj (∇X Yj) ∂z/∂Yj. (6.47)

6.5.3 Recursively Applying the Chain Rule to Obtain Backprop

Using the chain rule, it is straightforward to write down an algebraic expression for the gradient of a scalar with respect to any node in the computational graph that produced that scalar. However, actually evaluating that expression in a computer introduces some extra considerations.

Specifically, many subexpressions may be repeated several times within the overall expression for the gradient. Any procedure that computes the gradient
  • 224. CHAPTER 6. DEEP FEEDFORWARD NETWORKS will need to choose whether to store these subexpressions or to recompute them several times. An example of how these repeated subexpressions arise is given in figure . In some cases, computing the same subexpression twice would simply 6.9 be wasteful. For complicated graphs, there can be exponentially many of these wasted computations, making a naive implementation of the chain rule infeasible. In other cases, computing the same subexpression twice could be a valid way to reduce memory consumption at the cost of higher runtime. We first begin by a version of the back-propagation algorithm that specifies the actual gradient computation directly (algorithm along with algorithm for the 6.2 6.1 associated forward computation), in the order it will actually be done and according to the recursive application of chain rule. One could either directly perform these computations or view the description of the algorithm as a symbolic specification of the computational graph for computing the back-propagation. However, this formulation does not make explicit the manipulation and the construction of the symbolic graph that performs the gradient computation. Such a formulation is presented below in section , with algorithm , where we also generalize to 6.5.6 6.5 nodes that contain arbitrary tensors. First consider a computational graph describing how to compute a single scalar u( ) n (say the loss on a training example). This scalar is the quantity whose gradient we want to obtain, with respect to the ni input nodes u(1) to u(ni). In other words we wish to compute ∂u( ) n ∂u( ) i for all i ∈ {1,2, . . . , ni}. In the application of back-propagation to computing gradients for gradient descent over parameters, u( ) n will be the cost associated with an example or a minibatch, while u(1) to u(ni) correspond to the parameters of the model. We will assume that the nodes of the graph have been ordered in such a way that we can compute their output one after the other, starting at u(ni +1) and going up to u( ) n . As defined in algorithm , each node 6.1 u( ) i is associated with an operation f( ) i and is computed by evaluating the function u( ) i = ( f A( ) i ) (6.48) where A( ) i is the set of all nodes that are parents of u( ) i . That algorithm specifies the forward propagation computation, which we could put in a graph G. In order to perform back-propagation, we can construct a computational graph that depends onG and adds to it an extra set of nodes. These form a subgraph B with one node per node of G. Computation in B proceeds in exactly the reverse of the order of computation in G, and each node of B computes the derivative ∂u( ) n ∂u( ) i associated with the forward graph node u( ) i . This is done 208
  • 225. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Algorithm 6.1 A procedure that performs the computations mapping ni inputs u(1) to u(ni) to an output u( ) n . This defines a computational graph where each node computes numerical value u( ) i by applying a function f ( ) i to the set of arguments A( ) i that comprises the values of previous nodes u( ) j , j < i, with j Pa ∈ (u( ) i ). The input to the computational graph is the vector x, and is set into the first ni nodes u(1) to u(ni ) . The output of the computational graph is read off the last (output) node u( ) n . for i , . . . , n = 1 i do u( ) i ← xi end for for i n = i + 1, . . . , n do A( ) i ← {u( ) j | ∈ j Pa u ( ( ) i )} u( ) i ← f ( ) i (A( ) i ) end for return u( ) n using the chain rule with respect to scalar output u( ) n : ∂u( ) n ∂u( ) j =  i j P a u : ∈ ( ( ) i ) ∂u( ) n ∂u( ) i ∂u( ) i ∂u( ) j (6.49) as specified by algorithm . The subgraph 6.2 B contains exactly one edge for each edge from node u( ) j to node u( ) i of G. The edge from u( ) j to u( ) i is associated with the computation of ∂u( ) i ∂u( ) j . In addition, a dot product is performed for each node, between the gradient already computed with respect to nodes u( ) i that are children of u( ) j and the vector containing the partial derivatives ∂u( ) i ∂u( ) j for the same children nodes u( ) i . To summarize, the amount of computation required for performing the back-propagation scales linearly with the number of edges in G, where the computation for each edge corresponds to computing a partial derivative (of one node with respect to one of its parents) as well as performing one multiplication and one addition. Below, we generalize this analysis to tensor-valued nodes, which is just a way to group multiple scalar values in the same node and enable more efficient implementations. The back-propagation algorithm is designed to reduce the number of common subexpressions without regard to memory. Specifically, it performs on the order of one Jacobian product per node in the graph. This can be seen from the fact that backprop (algorithm ) visits each edge from node 6.2 u( ) j to node u( ) i of the graph exactly once in order to obtain the associated partial derivative ∂u( ) i ∂u( ) j . 209
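A minimal Python sketch of this table-filling procedure (the idea formalized as algorithms 6.1 and 6.2 below) is shown here; the dictionary-based graph representation and the example functions are illustrative assumptions, not the book's data structures:

    import math

    # Example graph: u1 = x, u2 = y, u3 = u1 * u2, u4 = sin(u3), with n_i = 2
    # input nodes. Each non-input node stores its parents, a function f of the
    # parent values, and the partial derivatives of f with respect to each parent.
    nodes = {
        3: {"parents": [1, 2], "f": lambda a, b: a * b,
            "dparents": [lambda a, b: b, lambda a, b: a]},
        4: {"parents": [3], "f": lambda a: math.sin(a),
            "dparents": [lambda a: math.cos(a)]},
    }

    def forward(x, y):
        # Forward propagation in the spirit of algorithm 6.1.
        u = {1: x, 2: y}
        for i in sorted(nodes):
            args = [u[j] for j in nodes[i]["parents"]]
            u[i] = nodes[i]["f"](*args)
        return u

    def backward(u, n=4):
        # Table-filling backprop in the spirit of equation 6.49 / algorithm 6.2.
        grad_table = {n: 1.0}
        for j in sorted(u, reverse=True):
            if j == n:
                continue
            total = 0.0
            for i, node in nodes.items():             # children of node j
                if j in node["parents"] and i in grad_table:
                    args = [u[p] for p in node["parents"]]
                    k = node["parents"].index(j)
                    total += grad_table[i] * node["dparents"][k](*args)
            grad_table[j] = total
        return grad_table

    u = forward(2.0, 3.0)
    print(backward(u))   # du4/du3 = cos(6), du4/du1 = 3 cos(6), du4/du2 = 2 cos(6)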
  • 226. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Algorithm 6.2 Simplified version of the back-propagation algorithm for computing the derivatives of u( ) n with respect to the variables in the graph. This example is intended to further understanding by showing a simplified case where all variables are scalars, and we wish to compute the derivatives with respect to u(1), . . . , u(ni ). This simplified version computes the derivatives of all nodes in the graph. The computational cost of this algorithm is proportional to the number of edges in the graph, assuming that the partial derivative associated with each edge requires a constant time. This is of the same order as the number of computations for the forward propagation. Each ∂u( ) i ∂u( ) j is a function of the parents u( ) j of u( ) i , thus linking the nodes of the forward graph to those added for the back-propagation graph. Run forward propagation (algorithm for this example) to obtain the activa- 6.1 tions of the network Initialize grad_table, a data structure that will store the derivatives that have been computed. The entry grad table _ [u( ) i ] will store the computed value of ∂u( ) n ∂u( ) i . grad table _ [u( ) n ] 1 ← for do j n = − 1 down to 1 The next line computes ∂u( ) n ∂u( ) j =  i j P a u : ∈ ( ( ) i ) ∂u( ) n ∂u( ) i ∂u ( ) i ∂u( ) j using stored values: grad table _ [u( ) j ] ←  i j P a u : ∈ ( ( ) i ) grad table _ [u( ) i ]∂u( ) i ∂u( ) j end for return {grad table _ [u( ) i ] = 1 | i , . . . , ni} Back-propagation thus avoids the exponential explosion in repeated subexpressions. However, other algorithms may be able to avoid more subexpressions by performing simplifications on the computational graph, or may be able to conserve memory by recomputing rather than storing some subexpressions. We will revisit these ideas after describing the back-propagation algorithm itself. 6.5.4 Back-Propagation Computation in Fully-Connected MLP To clarify the above definition of the back-propagation computation, let us consider the specific graph associated with a fully-connected multi-layer MLP. Algorithm first shows the forward propagation, which maps parameters to 6.3 the supervised loss L(ŷ y , ) associated with a single (input,target) training example ( ) x y , , with ŷ the output of the neural network when is provided in input. x Algorithm then shows the corresponding computation to be done for 6.4 210
Figure 6.9: A computational graph that results in repeated subexpressions when computing the gradient. Let w ∈ R be the input to the graph. We use the same function f : R → R as the operation that we apply at every step of a chain: x = f(w), y = f(x), z = f(y). To compute ∂z/∂w, we apply equation 6.44 and obtain:

∂z/∂w (6.50)
= (∂z/∂y)(∂y/∂x)(∂x/∂w) (6.51)
= f′(y) f′(x) f′(w) (6.52)
= f′(f(f(w))) f′(f(w)) f′(w). (6.53)

Equation 6.52 suggests an implementation in which we compute the value of f(w) only once and store it in the variable x. This is the approach taken by the back-propagation algorithm. An alternative approach is suggested by equation 6.53, where the subexpression f(w) appears more than once. In the alternative approach, f(w) is recomputed each time it is needed. When the memory required to store the value of these expressions is low, the back-propagation approach of equation 6.52 is clearly preferable because of its reduced runtime. However, equation 6.53 is also a valid implementation of the chain rule, and is useful when memory is limited.
applying the back-propagation algorithm to this graph.

Algorithms 6.3 and 6.4 are demonstrations that are chosen to be simple and straightforward to understand. However, they are specialized to one specific problem. Modern software implementations are based on the generalized form of back-propagation described in section 6.5.6 below, which can accommodate any computational graph by explicitly manipulating a data structure for representing symbolic computation.

Algorithm 6.3 Forward propagation through a typical deep neural network and the computation of the cost function. The loss L(ŷ, y) depends on the output ŷ and on the target y (see section 6.2.1.1 for examples of loss functions). To obtain the total cost J, the loss may be added to a regularizer Ω(θ), where θ contains all the parameters (weights and biases). Algorithm 6.4 shows how to compute gradients of J with respect to parameters W and b. For simplicity, this demonstration uses only a single input example x. Practical applications should use a minibatch. See section 6.5.7 for a more realistic demonstration.

Require: Network depth, l
Require: W(i), i ∈ {1, . . . , l}, the weight matrices of the model
Require: b(i), i ∈ {1, . . . , l}, the bias parameters of the model
Require: x, the input to process
Require: y, the target output
h(0) = x
for k = 1, . . . , l do
  a(k) = b(k) + W(k) h(k−1)
  h(k) = f(a(k))
end for
ŷ = h(l)
J = L(ŷ, y) + λ Ω(θ)

6.5.5 Symbol-to-Symbol Derivatives

Algebraic expressions and computational graphs both operate on symbols, or variables that do not have specific values. These algebraic and graph-based representations are called symbolic representations. When we actually use or train a neural network, we must assign specific values to these symbols. We replace a symbolic input to the network x with a specific numeric value, such as [1.2, 3.765, −1.8]⊤.
Algorithm 6.4 Backward computation for the deep neural network of algorithm 6.3, which uses in addition to the input x a target y. This computation yields the gradients on the activations a(k) for each layer k, starting from the output layer and going backwards to the first hidden layer. From these gradients, which can be interpreted as an indication of how each layer's output should change to reduce error, one can obtain the gradient on the parameters of each layer. The gradients on weights and biases can be immediately used as part of a stochastic gradient update (performing the update right after the gradients have been computed) or used with other gradient-based optimization methods.

After the forward computation, compute the gradient on the output layer:
g ← ∇ŷ J = ∇ŷ L(ŷ, y)
for k = l, l − 1, . . . , 1 do
  Convert the gradient on the layer's output into a gradient into the pre-nonlinearity activation (element-wise multiplication if f is element-wise):
  g ← ∇a(k) J = g ⊙ f′(a(k))
  Compute gradients on weights and biases (including the regularization term, where needed):
  ∇b(k) J = g + λ ∇b(k) Ω(θ)
  ∇W(k) J = g h(k−1)⊤ + λ ∇W(k) Ω(θ)
  Propagate the gradients w.r.t. the next lower-level hidden layer's activations:
  g ← ∇h(k−1) J = W(k)⊤ g
end for
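A minimal NumPy rendering of algorithms 6.3 and 6.4 for a single example might look as follows; the choice of tanh for f, the squared-error loss, and the omission of the regularizer (λ = 0) are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [3, 4, 2]                                     # input and layer widths
    W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
    b = [np.zeros(m) for m in sizes[1:]]
    x = rng.standard_normal(sizes[0])
    y = np.array([0.5, -0.5])

    # Forward propagation (algorithm 6.3): a(k) = b(k) + W(k) h(k-1), h(k) = f(a(k)).
    h, a = [x], []
    for k in range(len(W)):
        a.append(b[k] + W[k] @ h[k])
        h.append(np.tanh(a[k]))
    yhat = h[-1]
    J = 0.5 * np.sum((yhat - y) ** 2)                     # L(yhat, y), no regularizer

    # Backward computation (algorithm 6.4).
    g = yhat - y                                          # gradient of L w.r.t. yhat
    dW, db = [None] * len(W), [None] * len(W)
    for k in reversed(range(len(W))):
        g = g * (1.0 - np.tanh(a[k]) ** 2)                # g <- g ⊙ f'(a(k))
        db[k] = g
        dW[k] = np.outer(g, h[k])                         # g h(k-1)^T
        g = W[k].T @ g                                    # propagate to h(k-1)
    print(J, dW[0].shape, db[0].shape)                    # cost and gradient shapes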
Figure 6.10: An example of the symbol-to-symbol approach to computing derivatives. In this approach, the back-propagation algorithm does not need to ever access any actual specific numeric values. Instead, it adds nodes to a computational graph describing how to compute these derivatives. A generic graph evaluation engine can later compute the derivatives for any specific numeric values. (Left) In this example, we begin with a graph representing z = f(f(f(w))). (Right) We run the back-propagation algorithm, instructing it to construct the graph for the expression corresponding to dz/dw. In this example, we do not explain how the back-propagation algorithm works. The purpose is only to illustrate what the desired result is: a computational graph with a symbolic description of the derivative.

Some approaches to back-propagation take a computational graph and a set of numerical values for the inputs to the graph, then return a set of numerical values describing the gradient at those input values. We call this approach "symbol-to-number" differentiation. This is the approach used by libraries such as Torch (Collobert et al., 2011b) and Caffe (Jia, 2013).

Another approach is to take a computational graph and add additional nodes to the graph that provide a symbolic description of the desired derivatives. This is the approach taken by Theano (Bergstra et al., 2010; Bastien et al., 2012) and TensorFlow (Abadi et al., 2015). An example of how this approach works is illustrated in figure 6.10. The primary advantage of this approach is that the derivatives are described in the same language as the original expression. Because the derivatives are just another computational graph, it is possible to run back-propagation again, differentiating the derivatives in order to obtain higher derivatives. Computation of higher-order derivatives is described in section 6.5.10.

We will use the latter approach and describe the back-propagation algorithm in
  • 231. CHAPTER 6. DEEP FEEDFORWARD NETWORKS terms of constructing a computational graph for the derivatives. Any subset of the graph may then be evaluated using specific numerical values at a later time. This allows us to avoid specifying exactly when each operation should be computed. Instead, a generic graph evaluation engine can evaluate every node as soon as its parents’ values are available. The description of the symbol-to-symbol based approach subsumes the symbol- to-number approach. The symbol-to-number approach can be understood as performing exactly the same computations as are done in the graph built by the symbol-to-symbol approach. The key difference is that the symbol-to-number approach does not expose the graph. 6.5.6 General Back-Propagation The back-propagation algorithm is very simple. To compute the gradient of some scalar z with respect to one of its ancestors x in the graph, we begin by observing that the gradient with respect to z is given by dz dz = 1. We can then compute the gradient with respect to each parent of z in the graph by multiplying the current gradient by the Jacobian of the operation that produced z. We continue multiplying by Jacobians traveling backwards through the graph in this way until we reach x. For any node that may be reached by going backwards from z through two or more paths, we simply sum the gradients arriving from different paths at that node. More formally, each node in the graph G corresponds to a variable. To achieve maximum generality, we describe this variable as being a tensor V. Tensor can in general have any number of dimensions. They subsume scalars, vectors, and matrices. We assume that each variable is associated with the following subroutines: V • get operation _ (V): This returns the operation that computes V, repre- sented by the edges coming into V in the computational graph. For example, there may be a Python or C++ class representing the matrix multiplication operation, and the get_operation function. Suppose we have a variable that is created by matrix multiplication, C = AB. Then get operation _ (V) returns a pointer to an instance of the corresponding C++ class. • get consumers _ (V, G): This returns the list of variables that are children of V in the computational graph . G • G get inputs _ (V, ): This returns the list of variables that are parents of V in the computational graph . G 215
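A hypothetical Python sketch of this bookkeeping is given below; the classes and function bodies are illustrative and do not correspond to the interface of any particular library:

    class Operation:
        def __init__(self, inputs):
            self.inputs = inputs                  # parent variables of the output
        def f(self, *values):
            raise NotImplementedError
        def bprop(self, inputs, X, G):
            raise NotImplementedError             # defined per operation (see below)

    class Variable:
        def __init__(self, op=None):
            self.op = op                          # the operation that computes V
            self.consumers = []                   # child variables computed from V

    def get_operation(V):
        # The operation that computes V (the edges coming into V).
        return V.op

    def get_consumers(V, graph):
        # Variables that are children of V in the computational graph.
        return [c for c in V.consumers if c in graph]

    def get_inputs(V, graph):
        # Variables that are parents of V in the computational graph.
        return [] if V.op is None else [p for p in V.op.inputs if p in graph]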
  • 232. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Each operation op is also associated with a bprop operation. This bprop operation can compute a Jacobian-vector product as described by equation . 6.47 This is how the back-propagation algorithm is able to achieve great generality. Each operation is responsible for knowing how to back-propagate through the edges in the graph that it participates in. For example, we might use a matrix multiplication operation to create a variable C = AB. Suppose that the gradient of a scalar z with respect to C is given by G. The matrix multiplication operation is responsible for defining two back-propagation rules, one for each of its input arguments. If we call the bprop method to request the gradient with respect to A given that the gradient on the output is G, then the bprop method of the matrix multiplication operation must state that the gradient with respect to A is given by GB. Likewise, if we call the bprop method to request the gradient with respect to B, then the matrix operation is responsible for implementing the bprop method and specifying that the desired gradient is given by A G. The back-propagation algorithm itself does not need to know any differentiation rules. It only needs to call each operation’s bprop rules with the right arguments. Formally, op bprop inputs . ( , , X G) must return  i (∇ Xop f inputs . ( )i) Gi, (6.54) which is just an implementation of the chain rule as expressed in equation . 6.47 Here, inputs is a list of inputs that are supplied to the operation, op.f is the mathematical function that the operation implements,X is the input whose gradient we wish to compute, and is the gradient on the output of the operation. G The op.bprop method should always pretend that all of its inputs are distinct from each other, even if they are not. For example, if the mul operator is passed two copies of x to compute x2, the op.bprop method should still return x as the derivative with respect to both inputs. The back-propagation algorithm will later add both of these arguments together to obtain 2x, which is the correct total derivative on . x Software implementations of back-propagation usually provide both the opera- tions and their bprop methods, so that users of deep learning software libraries are able to back-propagate through graphs built using common operations like matrix multiplication, exponents, logarithms, and so on. Software engineers who build a new implementation of back-propagation or advanced users who need to add their own operation to an existing library must usually derive the op.bprop method for any new operations manually. The back-propagation algorithm is formally described in algorithm . 6.5 216
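For concreteness, a hypothetical sketch of such a bprop method for the matrix multiplication operation is shown below; the class is illustrative rather than the API of any existing library, and it implements only the two rules just described:

    import numpy as np

    class MatMul:
        # Operation computing C = A B, with one bprop rule per input argument.
        def f(self, A, B):
            return A @ B

        def bprop(self, inputs, X, G):
            # Return the gradient of a scalar z with respect to the input X,
            # given the gradient G of z with respect to the output C = A B.
            A, B = inputs
            if X is A:
                return G @ B.T        # gradient with respect to A is G B^T
            if X is B:
                return A.T @ G        # gradient with respect to B is A^T G
            raise ValueError("X is not an input of this operation")

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((2, 3)), rng.standard_normal((3, 4))
    G = np.ones((2, 4))               # stand-in gradient on the output
    op = MatMul()
    print(op.bprop((A, B), A, G).shape, op.bprop((A, B), B, G).shape)  # (2,3) (3,4)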
  • 233. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Algorithm 6.5 The outermost skeleton of the back-propagation algorithm. This portion does simple setup and cleanup work. Most of the important work happens in the subroutine of algorithm build_grad 6.6 . Require: T, the target set of variables whose gradients must be computed. Require: G, the computational graph Require: z, the variable to be differentiated Let G be G pruned to contain only nodes that are ancestors of z and descendents of nodes in . T Initialize , a data structure associating tensors to their gradients grad_table grad table _ [ ] 1 z ← for do V in T build grad _ (V, , G G , grad table _ ) end for Return restricted to grad_table T In section , we explained that back-propagation was developed in order to 6.5.2 avoid computing the same subexpression in the chain rule multiple times. The naive algorithm could have exponential runtime due to these repeated subexpressions. Now that we have specified the back-propagation algorithm, we can understand its computational cost. If we assume that each operation evaluation has roughly the same cost, then we may analyze the computational cost in terms of the number of operations executed. Keep in mind here that we refer to an operation as the fundamental unit of our computational graph, which might actually consist of very many arithmetic operations (for example, we might have a graph that treats matrix multiplication as a single operation). Computing a gradient in a graph with n nodes will never execute more than O(n2) operations or store the output of more than O(n2) operations. Here we are counting operations in the computational graph, not individual operations executed by the underlying hardware, so it is important to remember that the runtime of each operation may be highly variable. For example, multiplying two matrices that each contain millions of entries might correspond to a single operation in the graph. We can see that computing the gradient requires as most O(n2 ) operations because the forward propagation stage will at worst execute all n nodes in the original graph (depending on which values we want to compute, we may not need to execute the entire graph). The back-propagation algorithm adds one Jacobian-vector product, which should be expressed with O(1) nodes, per edge in the original graph. Because the computational graph is a directed acyclic graph it has at most O(n2 ) edges. For the kinds of graphs that are commonly used in practice, the situation is even better. Most neural network cost functions are 217
  • 234. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Algorithm 6.6 The inner loop subroutine build grad _ (V, , G G, grad table _ ) of the back-propagation algorithm, called by the back-propagation algorithm defined in algorithm . 6.5 Require: V, the variable whose gradient should be added to and . G grad_table Require: G, the graph to modify. Require: G , the restriction of to nodes that participate in the gradient. G Require: grad_table, a data structure mapping nodes to their gradients if then V is in grad_table Return _ grad table[ ] V end if i ← 1 for C V in _ get consumers( , G) do op get operation ← _ ( ) C D C ← build grad _ ( , , G G, grad table _ ) G( ) i ← G op bprop get inputs . ( _ (C, ) ) , , V D i i ← + 1 end for G ←  i G( ) i grad table _ [ ] = V G Insert and the operations creating it into G G Return G roughly chain-structured, causing back-propagation to have O(n) cost. This is far better than the naive approach, which might need to execute exponentially many nodes. This potentially exponential cost can be seen by expanding and rewriting the recursive chain rule (equation ) non-recursively: 6.49 ∂u( ) n ∂u( ) j =  path (u(π1),u(π 2),...,u(πt)), from π1= to j πt=n t  k=2 ∂u(πk) ∂u(πk−1 ) . (6.55) Since the number of paths from node j to node n can grow exponentially in the length of these paths, the number of terms in the above sum, which is the number of such paths, can grow exponentially with the depth of the forward propagation graph. This large cost would be incurred because the same computation for ∂u( ) i ∂u( ) j would be redone many times. To avoid such recomputation, we can think of back-propagation as a table-filling algorithm that takes advantage of storing intermediate results ∂u( ) n ∂u( ) i . Each node in the graph has a corresponding slot in a table to store the gradient for that node. By filling in these table entries in order, 218
back-propagation avoids repeating many common subexpressions. This table-filling strategy is sometimes called dynamic programming.

6.5.7 Example: Back-Propagation for MLP Training

As an example, we walk through the back-propagation algorithm as it is used to train a multilayer perceptron.

Here we develop a very simple multilayer perceptron with a single hidden layer. To train this model, we will use minibatch stochastic gradient descent. The back-propagation algorithm is used to compute the gradient of the cost on a single minibatch. Specifically, we use a minibatch of examples from the training set formatted as a design matrix X and a vector of associated class labels y. The network computes a layer of hidden features H = max{0, XW(1)}. To simplify the presentation we do not use biases in this model. We assume that our graph language includes a relu operation that can compute max{0, Z} element-wise. The predictions of the unnormalized log probabilities over classes are then given by HW(2). We assume that our graph language includes a cross_entropy operation that computes the cross-entropy between the targets y and the probability distribution defined by these unnormalized log probabilities. The resulting cross-entropy defines the cost JMLE. Minimizing this cross-entropy performs maximum likelihood estimation of the classifier. However, to make this example more realistic, we also include a regularization term. The total cost

J = JMLE + λ ( Σi,j (W(1)i,j)² + Σi,j (W(2)i,j)² ) (6.56)

consists of the cross-entropy and a weight decay term with coefficient λ. The computational graph is illustrated in figure 6.11.

The computational graph for the gradient of this example is large enough that it would be tedious to draw or to read. This demonstrates one of the benefits of the back-propagation algorithm, which is that it can automatically generate gradients that would be straightforward but tedious for a software engineer to derive manually.

We can roughly trace out the behavior of the back-propagation algorithm by looking at the forward propagation graph in figure 6.11. To train, we wish to compute both ∇W(1) J and ∇W(2) J. There are two different paths leading backward from J to the weights: one through the cross-entropy cost, and one through the weight decay cost. The weight decay cost is relatively simple; it will always contribute 2λW(i) to the gradient on W(i).
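A small NumPy sketch of this cost and of the gradient computation (anticipating the trace given after figure 6.11) follows; the shapes and the explicit softmax cross-entropy are illustrative assumptions rather than part of the graph language assumed in the text:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n_in, n_h, n_cls = 8, 5, 16, 3
    X = rng.standard_normal((m, n_in))
    y = rng.integers(0, n_cls, size=m)
    W1 = rng.standard_normal((n_in, n_h)) * 0.1
    W2 = rng.standard_normal((n_h, n_cls)) * 0.1
    lam = 1e-3

    # Forward pass: H = max{0, X W1}, logits U2 = H W2, cross-entropy plus decay.
    U1 = X @ W1
    H = np.maximum(0.0, U1)
    U2 = H @ W2
    P = np.exp(U2 - U2.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    J_mle = -np.mean(np.log(P[np.arange(m), y]))
    J = J_mle + lam * ((W1 ** 2).sum() + (W2 ** 2).sum())      # equation 6.56

    # Backward pass. G is the gradient on the unnormalized log probabilities U2.
    G = P.copy()
    G[np.arange(m), y] -= 1.0
    G /= m
    dW2 = H.T @ G + 2 * lam * W2           # shorter branch plus weight decay
    dH = G @ W2.T                          # gradient on H
    Gp = dH * (U1 > 0)                     # relu zeroes entries where U1 < 0
    dW1 = X.T @ Gp + 2 * lam * W1          # longer branch plus weight decay
    print(J, dW1.shape, dW2.shape)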
Figure 6.11: The computational graph used to compute the cost used to train our example of a single-layer MLP using the cross-entropy loss and weight decay. [Graph structure: X and W^(1) feed matmul to give U^(1), which relu maps to H; H and W^(2) feed matmul to give U^(2); U^(2) and y feed cross_entropy to give J_MLE; W^(1) and W^(2) feed sqr then sum to give u^(4) and u^(6); u^(4) and u^(6) add to u^(7), which is multiplied by λ to give u^(8); J_MLE and u^(8) add to give J.]

The other path through the cross-entropy cost is slightly more complicated. Let G be the gradient on the unnormalized log probabilities U^(2) provided by the cross_entropy operation. The back-propagation algorithm now needs to explore two different branches. On the shorter branch, it adds H^⊤G to the gradient on W^(2), using the back-propagation rule for the second argument to the matrix multiplication operation. The other branch corresponds to the longer chain descending further along the network. First, the back-propagation algorithm computes ∇_H J = GW^(2)⊤ using the back-propagation rule for the first argument to the matrix multiplication operation. Next, the relu operation uses its back-propagation rule to zero out components of the gradient corresponding to entries of U^(1) that were less than 0. Let the result be called G′. The last step of the back-propagation algorithm is to use the back-propagation rule for the second argument of the matmul operation to add X^⊤G′ to the gradient on W^(1).

After these gradients have been computed, it is the responsibility of the gradient descent algorithm, or another optimization algorithm, to use these gradients to update the parameters.

For the MLP, the computational cost is dominated by the cost of matrix multiplication. During the forward propagation stage, we multiply by each weight
  • 237. CHAPTER 6. DEEP FEEDFORWARD NETWORKS matrix, resulting in O(w) multiply-adds, where w is the number of weights. During the backward propagation stage, we multiply by the transpose of each weight matrix, which has the same computational cost. The main memory cost of the algorithm is that we need to store the input to the nonlinearity of the hidden layer. This value is stored from the time it is computed until the backward pass has returned to the same point. The memory cost is thus O(mnh), where m is the number of examples in the minibatch and nh is the number of hidden units. 6.5.8 Complications Our description of the back-propagation algorithm here is simpler than the imple- mentations actually used in practice. As noted above, we have restricted the definition of an operation to be a function that returns a single tensor. Most software implementations need to support operations that can return more than one tensor. For example, if we wish to compute both the maximum value in a tensor and the index of that value, it is best to compute both in a single pass through memory, so it is most efficient to implement this procedure as a single operation with two outputs. We have not described how to control the memory consumption of back- propagation. Back-propagation often involves summation of many tensors together. In the naive approach, each of these tensors would be computed separately, then all of them would be added in a second step. The naive approach has an overly high memory bottleneck that can be avoided by maintaining a single buffer and adding each value to that buffer as it is computed. Real-world implementations of back-propagation also need to handle various data types, such as 32-bit floating point, 64-bit floating point, and integer values. The policy for handling each of these types takes special care to design. Some operations have undefined gradients, and it is important to track these cases and determine whether the gradient requested by the user is undefined. Various other technicalities make real-world differentiation more complicated. These technicalities are not insurmountable, and this chapter has described the key intellectual tools needed to compute derivatives, but it is important to be aware that many more subtleties exist. 6.5.9 Differentiation outside the Deep Learning Community The deep learning community has been somewhat isolated from the broader computer science community and has largely developed its own cultural attitudes 221
concerning how to perform differentiation. More generally, the field of automatic differentiation is concerned with how to compute derivatives algorithmically. The back-propagation algorithm described here is only one approach to automatic differentiation. It is a special case of a broader class of techniques called reverse mode accumulation. Other approaches evaluate the subexpressions of the chain rule in different orders. In general, determining the order of evaluation that results in the lowest computational cost is a difficult problem. Finding the optimal sequence of operations to compute the gradient is NP-complete (Naumann, 2008), in the sense that it may require simplifying algebraic expressions into their least expensive form.

For example, suppose we have variables p_1, p_2, . . . , p_n representing probabilities and variables z_1, z_2, . . . , z_n representing unnormalized log probabilities. Suppose we define

q_i = \frac{\exp(z_i)}{\sum_i \exp(z_i)},    (6.57)

where we build the softmax function out of exponentiation, summation and division operations, and construct a cross-entropy loss J = −Σ_i p_i log q_i. A human mathematician can observe that the derivative of J with respect to z_i takes a very simple form: q_i − p_i. The back-propagation algorithm is not capable of simplifying the gradient this way, and will instead explicitly propagate gradients through all of the logarithm and exponentiation operations in the original graph. Some software libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012) are able to perform some kinds of algebraic substitution to improve over the graph proposed by the pure back-propagation algorithm.

When the forward graph 𝒢 has a single output node and each partial derivative ∂u^(i)/∂u^(j) can be computed with a constant amount of computation, back-propagation guarantees that the number of computations for the gradient computation is of the same order as the number of computations for the forward computation: this can be seen in algorithm 6.2 because each local partial derivative ∂u^(i)/∂u^(j) needs to be computed only once along with an associated multiplication and addition for the recursive chain-rule formulation (equation 6.49). The overall computation is therefore O(# edges). However, it can potentially be reduced by simplifying the computational graph constructed by back-propagation, and this is an NP-complete task. Implementations such as Theano and TensorFlow use heuristics based on matching known simplification patterns in order to iteratively attempt to simplify the graph.

We defined back-propagation only for the computation of a gradient of a scalar output but back-propagation can be extended to compute a Jacobian (either of k different scalar nodes in the graph, or of a tensor-valued node containing k values). A naive implementation may then need k times more computation: for
  • 239. CHAPTER 6. DEEP FEEDFORWARD NETWORKS each scalar internal node in the original forward graph, the naive implementation computes k gradients instead of a single gradient. When the number of outputs of the graph is larger than the number of inputs, it is sometimes preferable to use another form of automatic differentiation called forward mode accumulation. Forward mode computation has been proposed for obtaining real-time computation of gradients in recurrent networks, for example ( , ). This Williams and Zipser 1989 also avoids the need to store the values and gradients for the whole graph, trading off computational efficiency for memory. The relationship between forward mode and backward mode is analogous to the relationship between left-multiplying versus right-multiplying a sequence of matrices, such as ABCD, (6.58) where the matrices can be thought of as Jacobian matrices. For example, if D is a column vector while A has many rows, this corresponds to a graph with a single output and many inputs, and starting the multiplications from the end and going backwards only requires matrix-vector products. This corresponds to the backward mode. Instead, starting to multiply from the left would involve a series of matrix-matrix products, which makes the whole computation much more expensive. However, if A has fewer rows than D has columns, it is cheaper to run the multiplications left-to-right, corresponding to the forward mode. In many communities outside of machine learning, it is more common to im- plement differentiation software that acts directly on traditional programming language code, such as Python or C code, and automatically generates programs that differentiate functions written in these languages. In the deep learning com- munity, computational graphs are usually represented by explicit data structures created by specialized libraries. The specialized approach has the drawback of requiring the library developer to define the bprop methods for every operation and limiting the user of the library to only those operations that have been defined. However, the specialized approach also has the benefit of allowing customized back-propagation rules to be developed for each operation, allowing the developer to improve speed or stability in non-obvious ways that an automatic procedure would presumably be unable to replicate. Back-propagation is therefore not the only way or the optimal way of computing the gradient, but it is a very practical method that continues to serve the deep learning community very well. In the future, differentiation technology for deep networks may improve as deep learning practitioners become more aware of advances in the broader field of automatic differentiation. 223
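Returning to the softmax example in equation 6.57, the algebraic simplification a human (or a symbolic rewriter) can perform is easy to check numerically. The following NumPy sketch, using made-up values, compares the simplified gradient q − p against a finite-difference estimate of the gradient that a mechanical application of the chain rule through the log and exp operations must also produce:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
z = rng.normal(size=n)                       # unnormalized log probabilities
p = rng.dirichlet(np.ones(n))                # target probabilities

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def J(z):
    # Cross-entropy built explicitly from log, exp, sum and division operations.
    return -(p * np.log(softmax(z))).sum()

# Simplified gradient a human can derive: dJ/dz_i = q_i - p_i.
grad_simplified = softmax(z) - p

# Central finite-difference estimate of the same gradient.
eps = 1e-6
grad_numeric = np.array(
    [(J(z + eps * np.eye(n)[i]) - J(z - eps * np.eye(n)[i])) / (2 * eps)
     for i in range(n)]
)

print(np.max(np.abs(grad_simplified - grad_numeric)))   # ~0: the two forms agree
```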
6.5.10 Higher-Order Derivatives

Some software frameworks support the use of higher-order derivatives. Among the deep learning software frameworks, this includes at least Theano and TensorFlow. These libraries use the same kind of data structure to describe the expressions for derivatives as they use to describe the original function being differentiated. This means that the symbolic differentiation machinery can be applied to derivatives.

In the context of deep learning, it is rare to compute a single second derivative of a scalar function. Instead, we are usually interested in properties of the Hessian matrix. If we have a function f : R^n → R, then the Hessian matrix is of size n × n. In typical deep learning applications, n will be the number of parameters in the model, which could easily number in the billions. The entire Hessian matrix is thus infeasible to even represent.

Instead of explicitly computing the Hessian, the typical deep learning approach is to use Krylov methods. Krylov methods are a set of iterative techniques for performing various operations like approximately inverting a matrix or finding approximations to its eigenvectors or eigenvalues, without using any operation other than matrix-vector products.

In order to use Krylov methods on the Hessian, we only need to be able to compute the product between the Hessian matrix H and an arbitrary vector v. A straightforward technique (Christianson, 1992) for doing so is to compute

H\boldsymbol{v} = \nabla_{\boldsymbol{x}} \left[ \left( \nabla_{\boldsymbol{x}} f(\boldsymbol{x}) \right)^\top \boldsymbol{v} \right].    (6.59)

Both of the gradient computations in this expression may be computed automatically by the appropriate software library. Note that the outer gradient expression takes the gradient of a function of the inner gradient expression.

If v is itself a vector produced by a computational graph, it is important to specify that the automatic differentiation software should not differentiate through the graph that produced v.

While computing the Hessian is usually not advisable, it is possible to do with Hessian-vector products. One simply computes He^(i) for all i = 1, . . . , n, where e^(i) is the one-hot vector with e^(i)_i = 1 and all other entries equal to 0.

6.6 Historical Notes

Feedforward networks can be seen as efficient nonlinear function approximators based on using gradient descent to minimize the error in a function approximation.
  • 241. CHAPTER 6. DEEP FEEDFORWARD NETWORKS From this point of view, the modern feedforward network is the culmination of centuries of progress on the general function approximation task. The chain rule that underlies the back-propagation algorithm was invented in the 17th century ( , ; , ). Calculus and algebra have Leibniz 1676 L’Hôpital 1696 long been used to solve optimization problems in closed form, but gradient descent was not introduced as a technique for iteratively approximating the solution to optimization problems until the 19th century (Cauchy 1847 , ). Beginning in the 1940s, these function approximation techniques were used to motivate machine learning models such as the perceptron. However, the earliest models were based on linear models. Critics including Marvin Minsky pointed out several of the flaws of the linear model family, such as its inability to learn the XOR function, which led to a backlash against the entire neural network approach. Learning nonlinear functions required the development of a multilayer per- ceptron and a means of computing the gradient through such a model. Efficient applications of the chain rule based on dynamic programming began to appear in the 1960s and 1970s, mostly for control applications ( , ; Kelley 1960 Bryson and Denham 1961 Dreyfus 1962 Bryson and Ho 1969 Dreyfus 1973 , ; , ; , ; , ) but also for sensitivity analysis ( , ). Linnainmaa 1976 Werbos 1981 ( ) proposed applying these techniques to training artificial neural networks. The idea was finally developed in practice after being independently rediscovered in different ways ( , ; LeCun 1985 Parker 1985 Rumelhart 1986a , ; et al., ). The book Parallel Distributed Pro- cessing presented the results of some of the first successful experiments with back-propagation in a chapter ( , ) that contributed greatly Rumelhart et al. 1986b to the popularization of back-propagation and initiated a very active period of research in multi-layer neural networks. However, the ideas put forward by the authors of that book and in particular by Rumelhart and Hinton go much beyond back-propagation. They include crucial ideas about the possible computational implementation of several central aspects of cognition and learning, which came under the name of “connectionism” because of the importance this school of thought places on the connections between neurons as the locus of learning and memory. In particular, these ideas include the notion of distributed representation (Hinton et al., ). 1986 Following the success of back-propagation, neural network research gained pop- ularity and reached a peak in the early 1990s. Afterwards, other machine learning techniques became more popular until the modern deep learning renaissance that began in 2006. The core ideas behind modern feedforward networks have not changed sub- stantially since the 1980s. The same back-propagation algorithm and the same 225
  • 242. CHAPTER 6. DEEP FEEDFORWARD NETWORKS approaches to gradient descent are still in use. Most of the improvement in neural network performance from 1986 to 2015 can be attributed to two factors. First, larger datasets have reduced the degree to which statistical generalization is a challenge for neural networks. Second, neural networks have become much larger, due to more powerful computers, and better software infrastructure. However, a small number of algorithmic changes have improved the performance of neural networks noticeably. One of these algorithmic changes was the replacement of mean squared error with the cross-entropy family of loss functions. Mean squared error was popular in the 1980s and 1990s, but was gradually replaced by cross-entropy losses and the principle of maximum likelihood as ideas spread between the statistics community and the machine learning community. The use of cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss. The other major algorithmic change that has greatly improved the performance of feedforward networks was the replacement of sigmoid hidden units with piecewise linear hidden units, such as rectified linear units. Rectification using the max{0, z} function was introduced in early neural network models and dates back at least as far as the Cognitron and Neocognitron (Fukushima 1975 1980 , , ). These early models did not use rectified linear units, but instead applied rectification to nonlinear functions. Despite the early popularity of rectification, rectification was largely replaced by sigmoids in the 1980s, perhaps because sigmoids perform better when neural networks are very small. As of the early 2000s, rectified linear units were avoided due to a somewhat superstitious belief that activation functions with non-differentiable points must be avoided. This began to change in about 2009. Jarrett 2009 et al. ( ) observed that “using a rectifying nonlinearity is the single most important factor in improving the performance of a recognition system” among several different factors of neural network architecture design. For small datasets, ( ) observed that using rectifying non- Jarrett et al. 2009 linearities is even more important than learning the weights of the hidden layers. Random weights are sufficient to propagate useful information through a rectified linear network, allowing the classifier layer at the top to learn how to map different feature vectors to class identities. When more data is available, learning begins to extract enough useful knowledge to exceed the performance of randomly chosen parameters. ( ) Glorot et al. 2011a showed that learning is far easier in deep rectified linear networks than in deep networks that have curvature or two-sided saturation in their activation functions. 226
  • 243. CHAPTER 6. DEEP FEEDFORWARD NETWORKS Rectified linear units are also of historical interest because they show that neuroscience has continued to have an influence on the development of deep learning algorithms. ( ) motivate rectified linear units from Glorot et al. 2011a biological considerations. The half-rectifying nonlinearity was intended to capture these properties of biological neurons: 1) For some inputs, biological neurons are completely inactive. 2) For some inputs, a biological neuron’s output is proportional to its input. 3) Most of the time, biological neurons operate in the regime where they are inactive (i.e., they should have sparse activations). When the modern resurgence of deep learning began in 2006, feedforward networks continued to have a bad reputation. From about 2006-2012, it was widely believed that feedforward networks would not perform well unless they were assisted by other models, such as probabilistic models. Today, it is now known that with the right resources and engineering practices, feedforward networks perform very well. Today, gradient-based learning in feedforward networks is used as a tool to develop probabilistic models, such as the variational autoencoder and generative adversarial networks, described in chapter . Rather than being viewed as an unreliable 20 technology that must be supported by other techniques, gradient-based learning in feedforward networks has been viewed since 2012 as a powerful technology that may be applied to many other machine learning tasks. In 2006, the community used unsupervised learning to support supervised learning, and now, ironically, it is more common to use supervised learning to support unsupervised learning. Feedforward networks continue to have unfulfilled potential. In the future, we expect they will be applied to many more tasks, and that advances in optimization algorithms and model design will improve their performance even further. This chapter has primarily described the neural network family of models. In the subsequent chapters, we turn to how to use these models—how to regularize and train them. 227
  • 244. Chapter 7 Regularization for Deep Learning A central problem in machine learning is how to make an algorithm that will perform well not just on the training data, but also on new inputs. Many strategies used in machine learning are explicitly designed to reduce the test error, possibly at the expense of increased training error. These strategies are known collectively as regularization. As we will see there are a great many forms of regularization available to the deep learning practitioner. In fact, developing more effective regularization strategies has been one of the major research efforts in the field. Chapter introduced the basic concepts of generalization, underfitting, overfit- 5 ting, bias, variance and regularization. If you are not already familiar with these notions, please refer to that chapter before continuing with this one. In this chapter, we describe regularization in more detail, focusing on regular- ization strategies for deep models or models that may be used as building blocks to form deep models. Some sections of this chapter deal with standard concepts in machine learning. If you are already familiar with these concepts, feel free to skip the relevant sections. However, most of this chapter is concerned with the extension of these basic concepts to the particular case of neural networks. In section , we defined regularization as “any modification we make to 5.2.2 a learning algorithm that is intended to reduce its generalization error but not its training error.” There are many regularization strategies. Some put extra constraints on a machine learning model, such as adding restrictions on the parameter values. Some add extra terms in the objective function that can be thought of as corresponding to a soft constraint on the parameter values. If chosen carefully, these extra constraints and penalties can lead to improved performance 228
  • 245. CHAPTER 7. REGULARIZATION FOR DEEP LEARNING on the test set. Sometimes these constraints and penalties are designed to encode specific kinds of prior knowledge. Other times, these constraints and penalties are designed to express a generic preference for a simpler model class in order to promote generalization. Sometimes penalties and constraints are necessary to make an underdetermined problem determined. Other forms of regularization, known as ensemble methods, combine multiple hypotheses that explain the training data. In the context of deep learning, most regularization strategies are based on regularizing estimators. Regularization of an estimator works by trading increased bias for reduced variance. An effective regularizer is one that makes a profitable trade, reducing variance significantly while not overly increasing the bias. When we discussed generalization and overfitting in chapter , we focused on three situations, 5 where the model family being trained either (1) excluded the true data generating process—corresponding to underfitting and inducing bias, or (2) matched the true data generating process, or (3) included the generating process but also many other possible generating processes—the overfitting regime where variance rather than bias dominates the estimation error. The goal of regularization is to take a model from the third regime into the second regime. In practice, an overly complex model family does not necessarily include the target function or the true data generating process, or even a close approximation of either. We almost never have access to the true data generating process so we can never know for sure if the model family being estimated includes the generating process or not. However, most applications of deep learning algorithms are to domains where the true data generating process is almost certainly outside the model family. Deep learning algorithms are typically applied to extremely complicated domains such as images, audio sequences and text, for which the true generation process essentially involves simulating the entire universe. To some extent, we are always trying to fit a square peg (the data generating process) into a round hole (our model family). What this means is that controlling the complexity of the model is not a simple matter of finding the model of the right size, with the right number of parameters. Instead, we might find—and indeed in practical deep learning scenarios, we almost always do find—that the best fitting model (in the sense of minimizing generalization error) is a large model that has been regularized appropriately. We now review several strategies for how to create such a large, deep, regularized model. 229
7.1 Parameter Norm Penalties

Regularization has been used for decades prior to the advent of deep learning. Linear models such as linear regression and logistic regression allow simple, straightforward, and effective regularization strategies.

Many regularization approaches are based on limiting the capacity of models, such as neural networks, linear regression, or logistic regression, by adding a parameter norm penalty Ω(θ) to the objective function J. We denote the regularized objective function by J̃:

\tilde{J}(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) = J(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) + \alpha \Omega(\boldsymbol{\theta}),    (7.1)

where α ∈ [0, ∞) is a hyperparameter that weights the relative contribution of the norm penalty term, Ω, relative to the standard objective function J. Setting α to 0 results in no regularization. Larger values of α correspond to more regularization.

When our training algorithm minimizes the regularized objective function J̃ it will decrease both the original objective J on the training data and some measure of the size of the parameters θ (or some subset of the parameters). Different choices for the parameter norm Ω can result in different solutions being preferred. In this section, we discuss the effects of the various norms when used as penalties on the model parameters.

Before delving into the regularization behavior of different norms, we note that for neural networks, we typically choose to use a parameter norm penalty Ω that penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized. The biases typically require less data to fit accurately than the weights. Each weight specifies how two variables interact. Fitting the weight well requires observing both variables in a variety of conditions. Each bias controls only a single variable. This means that we do not induce too much variance by leaving the biases unregularized. Also, regularizing the bias parameters can introduce a significant amount of underfitting. We therefore use the vector w to indicate all of the weights that should be affected by a norm penalty, while the vector θ denotes all of the parameters, including both w and the unregularized parameters.

In the context of neural networks, it is sometimes desirable to use a separate penalty with a different α coefficient for each layer of the network. Because it can be expensive to search for the correct value of multiple hyperparameters, it is still reasonable to use the same weight decay at all layers just to reduce the size of the search space.
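A minimal NumPy sketch of equation 7.1 with an L2 penalty on a toy linear model, with made-up data. Following the discussion above, the penalty is applied only to the weights w and not to the bias; the gradient step it produces anticipates the "multiplicative shrinkage" view derived in section 7.1.1.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=32)

w = np.zeros(5)          # weights: regularized
b = 0.0                  # bias: left unregularized
alpha, lr = 0.1, 0.05    # penalty weight and learning rate

def objective(w, b):
    residual = X @ w + b - y
    J = 0.5 * np.mean(residual ** 2)          # standard objective J
    omega = 0.5 * np.dot(w, w)                # Omega(theta) = (1/2)||w||_2^2
    return J + alpha * omega                  # regularized objective J~

for _ in range(200):
    residual = X @ w + b - y
    grad_w = X.T @ residual / len(y) + alpha * w   # grad of J plus alpha * w
    grad_b = residual.mean()                       # bias gradient: no penalty term
    w -= lr * grad_w     # equivalently: w = (1 - lr*alpha)*w - lr*grad_of_J
    b -= lr * grad_b

print(objective(w, b))   # regularized objective after training
```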
7.1.1 L2 Parameter Regularization

We have already seen, in section 5.2.2, one of the simplest and most common kinds of parameter norm penalty: the L2 parameter norm penalty commonly known as weight decay. This regularization strategy drives the weights closer to the origin^1 by adding a regularization term Ω(θ) = ½‖w‖²₂ to the objective function. In other academic communities, L2 regularization is also known as ridge regression or Tikhonov regularization.

We can gain some insight into the behavior of weight decay regularization by studying the gradient of the regularized objective function. To simplify the presentation, we assume no bias parameter, so θ is just w. Such a model has the following total objective function:

\tilde{J}(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) = \frac{\alpha}{2} \boldsymbol{w}^\top \boldsymbol{w} + J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}),    (7.2)

with the corresponding parameter gradient

\nabla_{\boldsymbol{w}} \tilde{J}(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) = \alpha \boldsymbol{w} + \nabla_{\boldsymbol{w}} J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}).    (7.3)

To take a single gradient step to update the weights, we perform this update:

\boldsymbol{w} \leftarrow \boldsymbol{w} - \epsilon \left( \alpha \boldsymbol{w} + \nabla_{\boldsymbol{w}} J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) \right).    (7.4)

Written another way, the update is:

\boldsymbol{w} \leftarrow (1 - \epsilon\alpha) \boldsymbol{w} - \epsilon \nabla_{\boldsymbol{w}} J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}).    (7.5)

We can see that the addition of the weight decay term has modified the learning rule to multiplicatively shrink the weight vector by a constant factor on each step, just before performing the usual gradient update. This describes what happens in a single step. But what happens over the entire course of training?

We will further simplify the analysis by making a quadratic approximation to the objective function in the neighborhood of the value of the weights that obtains minimal unregularized training cost, w* = arg min_w J(w).

^1 More generally, we could regularize the parameters to be near any specific point in space and, surprisingly, still get a regularization effect, but better results will be obtained for a value closer to the true one, with zero being a default value that makes sense when we do not know if the correct value should be positive or negative. Since it is far more common to regularize the model parameters towards zero, we will focus on this special case in our exposition.
If the objective function is truly quadratic, as in the case of fitting a linear regression model with mean squared error, then the approximation is perfect. The approximation Ĵ is given by

\hat{J}(\boldsymbol{\theta}) = J(\boldsymbol{w}^*) + \frac{1}{2} (\boldsymbol{w} - \boldsymbol{w}^*)^\top \boldsymbol{H} (\boldsymbol{w} - \boldsymbol{w}^*),    (7.6)

where H is the Hessian matrix of J with respect to w evaluated at w*. There is no first-order term in this quadratic approximation, because w* is defined to be a minimum, where the gradient vanishes. Likewise, because w* is the location of a minimum of J, we can conclude that H is positive semidefinite.

The minimum of Ĵ occurs where its gradient

\nabla_{\boldsymbol{w}} \hat{J}(\boldsymbol{w}) = \boldsymbol{H} (\boldsymbol{w} - \boldsymbol{w}^*)    (7.7)

is equal to 0.

To study the effect of weight decay, we modify equation 7.7 by adding the weight decay gradient. We can now solve for the minimum of the regularized version of Ĵ. We use the variable w̃ to represent the location of the minimum.

\alpha \tilde{\boldsymbol{w}} + \boldsymbol{H} (\tilde{\boldsymbol{w}} - \boldsymbol{w}^*) = 0    (7.8)
(\boldsymbol{H} + \alpha \boldsymbol{I}) \tilde{\boldsymbol{w}} = \boldsymbol{H} \boldsymbol{w}^*    (7.9)
\tilde{\boldsymbol{w}} = (\boldsymbol{H} + \alpha \boldsymbol{I})^{-1} \boldsymbol{H} \boldsymbol{w}^*.    (7.10)

As α approaches 0, the regularized solution w̃ approaches w*. But what happens as α grows? Because H is real and symmetric, we can decompose it into a diagonal matrix Λ and an orthonormal basis of eigenvectors, Q, such that H = QΛQ^⊤. Applying the decomposition to equation 7.10, we obtain:

\tilde{\boldsymbol{w}} = (\boldsymbol{Q} \boldsymbol{\Lambda} \boldsymbol{Q}^\top + \alpha \boldsymbol{I})^{-1} \boldsymbol{Q} \boldsymbol{\Lambda} \boldsymbol{Q}^\top \boldsymbol{w}^*    (7.11)
= \left[ \boldsymbol{Q} (\boldsymbol{\Lambda} + \alpha \boldsymbol{I}) \boldsymbol{Q}^\top \right]^{-1} \boldsymbol{Q} \boldsymbol{\Lambda} \boldsymbol{Q}^\top \boldsymbol{w}^*    (7.12)
= \boldsymbol{Q} (\boldsymbol{\Lambda} + \alpha \boldsymbol{I})^{-1} \boldsymbol{\Lambda} \boldsymbol{Q}^\top \boldsymbol{w}^*.    (7.13)

We see that the effect of weight decay is to rescale w* along the axes defined by the eigenvectors of H. Specifically, the component of w* that is aligned with the i-th eigenvector of H is rescaled by a factor of λ_i / (λ_i + α). (You may wish to review how this kind of scaling works, first explained in figure 2.3.)

Along the directions where the eigenvalues of H are relatively large, for example, where λ_i ≫ α, the effect of regularization is relatively small. However, components with λ_i ≪ α will be shrunk to have nearly zero magnitude. This effect is illustrated in figure 7.1.
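A small NumPy check of equation 7.13 and the λ_i/(λ_i + α) rescaling, using a made-up positive definite Hessian: the regularized minimizer computed directly from equation 7.10 matches the eigenvector-by-eigenvector rescaling, and the component along the small-eigenvalue direction shrinks far more than the one along the large-eigenvalue direction.

```python
import numpy as np

# A made-up 2-D quadratic with one high-curvature and one low-curvature direction.
H = np.array([[5.0, 0.0],
              [0.0, 0.05]])          # eigenvalues 5 and 0.05
w_star = np.array([1.0, 1.0])        # unregularized minimizer
alpha = 0.5

# Equation 7.10: regularized minimizer.
w_tilde = np.linalg.solve(H + alpha * np.eye(2), H @ w_star)

# Equation 7.13: rescale each eigencomponent by lambda_i / (lambda_i + alpha).
lam, Q = np.linalg.eigh(H)
w_tilde_eig = Q @ ((lam / (lam + alpha)) * (Q.T @ w_star))

print(w_tilde)        # ~[0.909, 0.091]: small-eigenvalue component nearly zeroed
print(w_tilde_eig)    # identical values, via the eigendecomposition
```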
  • 249. CHAPTER 7. REGULARIZATION FOR DEEP LEARNING w1 w 2 w∗ w̃ Figure 7.1: An illustration of the effect ofL2 (or weight decay) regularization on the value of the optimal w. The solid ellipses represent contours of equal value of the unregularized objective. The dotted circles represent contours of equal value of theL2 regularizer. At the point w̃, these competing objectives reach an equilibrium. In the first dimension, the eigenvalue of the Hessian of J is small. The objective function does not increase much when moving horizontally away from w∗. Because the objective function does not express a strong preference along this direction, the regularizer has a strong effect on this axis. The regularizer pulls w1 close to zero. In the second dimension, the objective function is very sensitive to movements away from w∗ . The corresponding eigenvalue is large, indicating high curvature. As a result, weight decay affects the position ofw2 relatively little. Only directions along which the parameters contribute significantly to reducing the objective function are preserved relatively intact. In directions that do not contribute to reducing the objective function, a small eigenvalue of the Hessian tells us that movement in this direction will not significantly increase the gradient. Components of the weight vector corresponding to such unimportant directions are decayed away through the use of the regularization throughout training. So far we have discussed weight decay in terms of its effect on the optimization of an abstract, general, quadratic cost function. How do these effects relate to machine learning in particular? We can find out by studying linear regression, a model for which the true cost function is quadratic and therefore amenable to the same kind of analysis we have used so far. Applying the analysis again, we will be able to obtain a special case of the same results, but with the solution now phrased in terms of the training data. For linear regression, the cost function is 233
the sum of squared errors:

(\boldsymbol{X}\boldsymbol{w} - \boldsymbol{y})^\top (\boldsymbol{X}\boldsymbol{w} - \boldsymbol{y}).    (7.14)

When we add L2 regularization, the objective function changes to

(\boldsymbol{X}\boldsymbol{w} - \boldsymbol{y})^\top (\boldsymbol{X}\boldsymbol{w} - \boldsymbol{y}) + \frac{1}{2} \alpha \boldsymbol{w}^\top \boldsymbol{w}.    (7.15)

This changes the normal equations for the solution from

\boldsymbol{w} = (\boldsymbol{X}^\top \boldsymbol{X})^{-1} \boldsymbol{X}^\top \boldsymbol{y}    (7.16)

to

\boldsymbol{w} = (\boldsymbol{X}^\top \boldsymbol{X} + \alpha \boldsymbol{I})^{-1} \boldsymbol{X}^\top \boldsymbol{y}.    (7.17)

The matrix X^⊤X in equation 7.16 is proportional to the covariance matrix (1/m) X^⊤X. Using L2 regularization replaces this matrix with (X^⊤X + αI)^{−1} in equation 7.17. The new matrix is the same as the original one, but with the addition of α to the diagonal. The diagonal entries of this matrix correspond to the variance of each input feature. We can see that L2 regularization causes the learning algorithm to "perceive" the input X as having higher variance, which makes it shrink the weights on features whose covariance with the output target is low compared to this added variance.
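A short NumPy sketch of the modified normal equations (equation 7.17), using made-up data. With α = 0 it reduces to ordinary least squares (equation 7.16); increasing α adds to the diagonal of X^⊤X and shrinks the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
X = rng.normal(size=(m, n))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=m)

def ridge(X, y, alpha):
    """Solve (X^T X + alpha I) w = X^T y, the L2-regularized normal equations."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

print(ridge(X, y, alpha=0.0))    # ordinary least squares solution
print(ridge(X, y, alpha=10.0))   # weights shrunk toward zero
```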
7.1.2 L1 Regularization

While L2 weight decay is the most common form of weight decay, there are other ways to penalize the size of the model parameters. Another option is to use L1 regularization. Formally, L1 regularization on the model parameters w is defined as:

\Omega(\boldsymbol{\theta}) = \|\boldsymbol{w}\|_1 = \sum_i |w_i|,    (7.18)

that is, as the sum of absolute values of the individual parameters.^2 We will now discuss the effect of L1 regularization on the simple linear regression model, with no bias parameter, that we studied in our analysis of L2 regularization. In particular, we are interested in delineating the differences between L1 and L2 forms of regularization. As with L2 weight decay, L1 weight decay controls the strength of the regularization by scaling the penalty Ω using a positive hyperparameter α. Thus, the regularized objective function J̃(w; X, y) is given by

\tilde{J}(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) = \alpha \|\boldsymbol{w}\|_1 + J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}),    (7.19)

with the corresponding gradient (actually, sub-gradient):

\nabla_{\boldsymbol{w}} \tilde{J}(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) = \alpha \, \mathrm{sign}(\boldsymbol{w}) + \nabla_{\boldsymbol{w}} J(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}),    (7.20)

where sign(w) is simply the sign of w applied element-wise.

By inspecting equation 7.20, we can see immediately that the effect of L1 regularization is quite different from that of L2 regularization. Specifically, we can see that the regularization contribution to the gradient no longer scales linearly with each w_i; instead it is a constant factor with a sign equal to sign(w_i). One consequence of this form of the gradient is that we will not necessarily see clean algebraic solutions to quadratic approximations of J(w; X, y) as we did for L2 regularization.

Our simple linear model has a quadratic cost function that we can represent via its Taylor series. Alternately, we could imagine that this is a truncated Taylor series approximating the cost function of a more sophisticated model. The gradient in this setting is given by

\nabla_{\boldsymbol{w}} \hat{J}(\boldsymbol{w}) = \boldsymbol{H} (\boldsymbol{w} - \boldsymbol{w}^*),    (7.21)

where, again, H is the Hessian matrix of J with respect to w evaluated at w*.

Because the L1 penalty does not admit clean algebraic expressions in the case of a fully general Hessian, we will also make the further simplifying assumption that the Hessian is diagonal, H = diag([H_{1,1}, . . . , H_{n,n}]), where each H_{i,i} > 0. This assumption holds if the data for the linear regression problem has been preprocessed to remove all correlation between the input features, which may be accomplished using PCA.

Our quadratic approximation of the L1 regularized objective function decomposes into a sum over the parameters:

\hat{J}(\boldsymbol{w}; \boldsymbol{X}, \boldsymbol{y}) = J(\boldsymbol{w}^*; \boldsymbol{X}, \boldsymbol{y}) + \sum_i \left[ \frac{1}{2} H_{i,i} (w_i - w^*_i)^2 + \alpha |w_i| \right].    (7.22)

The problem of minimizing this approximate cost function has an analytical solution (for each dimension i), with the following form:

w_i = \mathrm{sign}(w^*_i) \max \left\{ |w^*_i| - \frac{\alpha}{H_{i,i}}, 0 \right\}.    (7.23)

^2 As with L2 regularization, we could regularize the parameters towards a value that is not zero, but instead towards some parameter value w^(o). In that case the L1 regularization would introduce the term Ω(θ) = ‖w − w^(o)‖_1 = Σ_i |w_i − w^(o)_i|.
Consider the situation where w*_i > 0 for all i. There are two possible outcomes:

1. The case where w*_i ≤ α / H_{i,i}. Here the optimal value of w_i under the regularized objective is simply w_i = 0. This occurs because the contribution of J(w; X, y) to the regularized objective J̃(w; X, y) is overwhelmed—in direction i—by the L1 regularization, which pushes the value of w_i to zero.

2. The case where w*_i > α / H_{i,i}. In this case, the regularization does not move the optimal value of w_i to zero but instead it just shifts it in that direction by a distance equal to α / H_{i,i}.

A similar process happens when w*_i < 0, but with the L1 penalty making w_i less negative by α / H_{i,i}, or 0.

In comparison to L2 regularization, L1 regularization results in a solution that is more sparse. Sparsity in this context refers to the fact that some parameters have an optimal value of zero. The sparsity of L1 regularization is a qualitatively different behavior than arises with L2 regularization. Equation 7.13 gave the solution w̃ for L2 regularization. If we revisit that equation using the assumption of a diagonal and positive definite Hessian H that we introduced for our analysis of L1 regularization, we find that w̃_i = H_{i,i} / (H_{i,i} + α) · w*_i. If w*_i was nonzero, then w̃_i remains nonzero. This demonstrates that L2 regularization does not cause the parameters to become sparse, while L1 regularization may do so for large enough α.

The sparsity property induced by L1 regularization has been used extensively as a feature selection mechanism. Feature selection simplifies a machine learning problem by choosing which subset of the available features should be used. In particular, the well known LASSO (Tibshirani, 1995) (least absolute shrinkage and selection operator) model integrates an L1 penalty with a linear model and a least squares cost function. The L1 penalty causes a subset of the weights to become zero, suggesting that the corresponding features may safely be discarded.

In section 5.6.1, we saw that many regularization strategies can be interpreted as MAP Bayesian inference, and that in particular, L2 regularization is equivalent to MAP Bayesian inference with a Gaussian prior on the weights. For L1 regularization, the penalty αΩ(w) = α Σ_i |w_i| used to regularize a cost function is equivalent to the log-prior term that is maximized by MAP Bayesian inference when the prior is an isotropic Laplace distribution (equation 3.26) over w ∈ R^n:

\log p(\boldsymbol{w}) = \sum_i \log \mathrm{Laplace}(w_i; 0, \tfrac{1}{\alpha}) = -\alpha \|\boldsymbol{w}\|_1 + n \log \alpha - n \log 2.    (7.24)

From the point of view of learning via maximization with respect to w, we can ignore the log α − log 2 terms because they do not depend on w.
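The contrast between the two behaviors is easy to see numerically. Under the diagonal-Hessian assumption, the L1 solution is the soft-thresholding formula of equation 7.23, while the L2 solution rescales each component by H_{i,i}/(H_{i,i} + α); a small NumPy sketch with made-up values shows L1 zeroing small components outright while L2 only shrinks them.

```python
import numpy as np

H_diag = np.array([2.0, 2.0, 2.0])        # diagonal Hessian entries H_{i,i}
w_star = np.array([1.5, 0.2, -0.05])      # unregularized optimum
alpha = 0.5

# L1: soft thresholding (equation 7.23) -> exact zeros for small |w*_i|.
w_l1 = np.sign(w_star) * np.maximum(np.abs(w_star) - alpha / H_diag, 0.0)

# L2: multiplicative shrinkage H_ii / (H_ii + alpha) -> nonzero stays nonzero.
w_l2 = (H_diag / (H_diag + alpha)) * w_star

print(w_l1)   # [ 1.25  0.   -0.  ]  -> sparse solution
print(w_l2)   # [ 1.2   0.16 -0.04]  -> shrunk but dense solution
```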
7.2 Norm Penalties as Constrained Optimization

Consider the cost function regularized by a parameter norm penalty:

\tilde{J}(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) = J(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) + \alpha \Omega(\boldsymbol{\theta}).    (7.25)

Recall from section 4.4 that we can minimize a function subject to constraints by constructing a generalized Lagrange function, consisting of the original objective function plus a set of penalties. Each penalty is a product between a coefficient, called a Karush–Kuhn–Tucker (KKT) multiplier, and a function representing whether the constraint is satisfied. If we wanted to constrain Ω(θ) to be less than some constant k, we could construct a generalized Lagrange function

\mathcal{L}(\boldsymbol{\theta}, \alpha; \boldsymbol{X}, \boldsymbol{y}) = J(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) + \alpha \left( \Omega(\boldsymbol{\theta}) - k \right).    (7.26)

The solution to the constrained problem is given by

\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} \max_{\alpha, \alpha \geq 0} \mathcal{L}(\boldsymbol{\theta}, \alpha).    (7.27)

As described in section 4.4, solving this problem requires modifying both θ and α. Section 4.5 provides a worked example of linear regression with an L2 constraint. Many different procedures are possible—some may use gradient descent, while others may use analytical solutions for where the gradient is zero—but in all procedures α must increase whenever Ω(θ) > k and decrease whenever Ω(θ) < k. All positive α encourage Ω(θ) to shrink. The optimal value α* will encourage Ω(θ) to shrink, but not so strongly as to make Ω(θ) become less than k.

To gain some insight into the effect of the constraint, we can fix α* and view the problem as just a function of θ:

\boldsymbol{\theta}^* = \arg\min_{\boldsymbol{\theta}} \mathcal{L}(\boldsymbol{\theta}, \alpha^*) = \arg\min_{\boldsymbol{\theta}} J(\boldsymbol{\theta}; \boldsymbol{X}, \boldsymbol{y}) + \alpha^* \Omega(\boldsymbol{\theta}).    (7.28)

This is exactly the same as the regularized training problem of minimizing J̃. We can thus think of a parameter norm penalty as imposing a constraint on the weights. If Ω is the L2 norm, then the weights are constrained to lie in an L2 ball. If Ω is the L1 norm, then the weights are constrained to lie in a region of
  • 254. CHAPTER 7. REGULARIZATION FOR DEEP LEARNING limited L1 norm. Usually we do not know the size of the constraint region that we impose by using weight decay with coefficient α∗ because the value of α∗ does not directly tell us the value of k. In principle, one can solve for k, but the relationship between k and α∗ depends on the form of J. While we do not know the exact size of the constraint region, we can control it roughly by increasing or decreasing α in order to grow or shrink the constraint region. Larger α will result in a smaller constraint region. Smaller will result in a larger constraint region. α Sometimes we may wish to use explicit constraints rather than penalties. As described in section , we can modify algorithms such as stochastic gradient 4.4 descent to take a step downhill on J (θ) and then project θ back to the nearest point that satisfies Ω(θ) < k. This can be useful if we have an idea of what value of k is appropriate and do not want to spend time searching for the value of α that corresponds to this . k Another reason to use explicit constraints and reprojection rather than enforcing constraints with penalties is that penalties can cause non-convex optimization procedures to get stuck in local minima corresponding to small θ. When training neural networks, this usually manifests as neural networks that train with several “dead units.” These are units that do not contribute much to the behavior of the function learned by the network because the weights going into or out of them are all very small. When training with a penalty on the norm of the weights, these configurations can be locally optimal, even if it is possible to significantly reduce J by making the weights larger. Explicit constraints implemented by re-projection can work much better in these cases because they do not encourage the weights to approach the origin. Explicit constraints implemented by re-projection only have an effect when the weights become large and attempt to leave the constraint region. Finally, explicit constraints with reprojection can be useful because they impose some stability on the optimization procedure. When using high learning rates, it is possible to encounter a positive feedback loop in which large weights induce large gradients which then induce a large update to the weights. If these updates consistently increase the size of the weights, then θ rapidly moves away from the origin until numerical overflow occurs. Explicit constraints with reprojection prevent this feedback loop from continuing to increase the magnitude of the weights without bound. ( ) recommend using constraints combined with Hinton et al. 2012c a high learning rate to allow rapid exploration of parameter space while maintaining some stability. In particular, Hinton 2012c et al. ( ) recommend a strategy introduced by Srebro and Shraibman 2005 ( ): constraining the norm of each column of the weight matrix 238
  • 255. CHAPTER 7. REGULARIZATION FOR DEEP LEARNING of a neural net layer, rather than constraining the Frobenius norm of the entire weight matrix. Constraining the norm of each column separately prevents any one hidden unit from having very large weights. If we converted this constraint into a penalty in a Lagrange function, it would be similar to L2 weight decay but with a separate KKT multiplier for the weights of each hidden unit. Each of these KKT multipliers would be dynamically updated separately to make each hidden unit obey the constraint. In practice, column norm limitation is always implemented as an explicit constraint with reprojection. 7.3 Regularization and Under-Constrained Problems In some cases, regularization is necessary for machine learning problems to be prop- erly defined. Many linear models in machine learning, including linear regression and PCA, depend on inverting the matrix X  X. This is not possible whenever X X is singular. This matrix can be singular whenever the data generating distri- bution truly has no variance in some direction, or when no variance is observed in some direction because there are fewer examples (rows of X) than input features (columns of X ). In this case, many forms of regularization correspond to inverting X X I + α instead. This regularized matrix is guaranteed to be invertible. These linear problems have closed form solutions when the relevant matrix is invertible. It is also possible for a problem with no closed form solution to be underdetermined. An example is logistic regression applied to a problem where the classes are linearly separable. If a weight vector w is able to achieve perfect classification, then 2w will also achieve perfect classification and higher likelihood. An iterative optimization procedure like stochastic gradient descent will continually increase the magnitude of w and, in theory, will never halt. In practice, a numerical implementation of gradient descent will eventually reach sufficiently large weights to cause numerical overflow, at which point its behavior will depend on how the programmer has decided to handle values that are not real numbers. Most forms of regularization are able to guarantee the convergence of iterative methods applied to underdetermined problems. For example, weight decay will cause gradient descent to quit increasing the magnitude of the weights when the slope of the likelihood is equal to the weight decay coefficient. The idea of using regularization to solve underdetermined problems extends beyond machine learning. The same idea is useful for several basic linear algebra problems. As we saw in section , we can solve underdetermined linear equations using 2.9 239
the Moore-Penrose pseudoinverse. Recall that one definition of the pseudoinverse X^+ of a matrix X is

\boldsymbol{X}^+ = \lim_{\alpha \searrow 0} (\boldsymbol{X}^\top \boldsymbol{X} + \alpha \boldsymbol{I})^{-1} \boldsymbol{X}^\top.    (7.29)

We can now recognize equation 7.29 as performing linear regression with weight decay. Specifically, equation 7.29 is the limit of equation 7.17 as the regularization coefficient shrinks to zero. We can thus interpret the pseudoinverse as stabilizing underdetermined problems using regularization.

7.4 Dataset Augmentation

The best way to make a machine learning model generalize better is to train it on more data. Of course, in practice, the amount of data we have is limited. One way to get around this problem is to create fake data and add it to the training set. For some machine learning tasks, it is reasonably straightforward to create new fake data.

This approach is easiest for classification. A classifier needs to take a complicated, high dimensional input x and summarize it with a single category identity y. This means that the main task facing a classifier is to be invariant to a wide variety of transformations. We can generate new (x, y) pairs easily just by transforming the x inputs in our training set.

This approach is not as readily applicable to many other tasks. For example, it is difficult to generate new fake data for a density estimation task unless we have already solved the density estimation problem.

Dataset augmentation has been a particularly effective technique for a specific classification problem: object recognition. Images are high dimensional and include an enormous variety of factors of variation, many of which can be easily simulated. Operations like translating the training images a few pixels in each direction can often greatly improve generalization, even if the model has already been designed to be partially translation invariant by using the convolution and pooling techniques described in chapter 9. Many other operations such as rotating the image or scaling the image have also proven quite effective.

One must be careful not to apply transformations that would change the correct class. For example, optical character recognition tasks require recognizing the difference between 'b' and 'd' and the difference between '6' and '9', so horizontal flips and 180° rotations are not appropriate ways of augmenting datasets for these tasks.
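As an illustration, a minimal NumPy sketch of two such label-preserving transformations (small random translations and horizontal flips) applied to a made-up batch of images. It assumes a generic object recognition task; as noted above, flips would be inappropriate for tasks like character recognition where they change the correct class, and the translation here uses a wrap-around roll purely for brevity rather than padding.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(images, max_shift=2, flip_prob=0.5):
    """Randomly translate by a few pixels and horizontally flip a batch.

    images: array of shape (batch, height, width). Labels are unchanged,
    since these transformations are assumed not to alter the class.
    """
    out = np.empty_like(images)
    for i, img in enumerate(images):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wrap-around shift
        if rng.random() < flip_prob:
            shifted = shifted[:, ::-1]        # horizontal flip
        out[i] = shifted
    return out

batch = rng.random(size=(16, 28, 28))         # made-up image batch
augmented = augment(batch)                    # new "fake" training examples
```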
  • 257. CHAPTER 7. REGULARIZATION FOR DEEP LEARNING There are also transformations that we would like our classifiers to be invariant to, but which are not easy to perform. For example, out-of-plane rotation can not be implemented as a simple geometric operation on the input pixels. Dataset augmentation is effective for speech recognition tasks as well (Jaitly and Hinton 2013 , ). Injecting noise in the input to a neural network (Sietsma and Dow 1991 , ) can also be seen as a form of data augmentation. For many classification and even some regression tasks, the task should still be possible to solve even if small random noise is added to the input. Neural networks prove not to be very robust to noise, however (Tang and Eliasmith 2010 , ). One way to improve the robustness of neural networks is simply to train them with random noise applied to their inputs. Input noise injection is part of some unsupervised learning algorithms such as the denoising autoencoder (Vincent 2008 et al., ). Noise injection also works when the noise is applied to the hidden units, which can be seen as doing dataset augmentation at multiple levels of abstraction. Poole 2014 et al. ( ) recently showed that this approach can be highly effective provided that the magnitude of the noise is carefully tuned. Dropout, a powerful regularization strategy that will be described in section , can be seen as a process of constructing new inputs by 7.12 multiplying by noise. When comparing machine learning benchmark results, it is important to take the effect of dataset augmentation into account. Often, hand-designed dataset augmentation schemes can dramatically reduce the generalization error of a machine learning technique. To compare the performance of one machine learning algorithm to another, it is necessary to perform controlled experiments. When comparing machine learning algorithm A and machine learning algorithm B, it is necessary to make sure that both algorithms were evaluated using the same hand-designed dataset augmentation schemes. Suppose that algorithm A performs poorly with no dataset augmentation and algorithm B performs well when combined with numerous synthetic transformations of the input. In such a case it is likely the synthetic transformations caused the improved performance, rather than the use of machine learning algorithm B. Sometimes deciding whether an experiment has been properly controlled requires subjective judgment. For example, machine learning algorithms that inject noise into the input are performing a form of dataset augmentation. Usually, operations that are generally applicable (such as adding Gaussian noise to the input) are considered part of the machine learning algorithm, while operations that are specific to one application domain (such as randomly cropping an image) are considered to be separate pre-processing steps. 241
7.5 Noise Robustness

Section 7.4 has motivated the use of noise applied to the inputs as a dataset augmentation strategy. For some models, the addition of noise with infinitesimal variance at the input of the model is equivalent to imposing a penalty on the norm of the weights (Bishop, 1995a,b). In the general case, it is important to remember that noise injection can be much more powerful than simply shrinking the parameters, especially when the noise is added to the hidden units. Noise applied to the hidden units is such an important topic that it merits its own separate discussion; the dropout algorithm described in section 7.12 is the main development of that approach.

Another way that noise has been used in the service of regularizing models is by adding it to the weights. This technique has been used primarily in the context of recurrent neural networks (Jim et al., 1996; Graves, 2011). This can be interpreted as a stochastic implementation of Bayesian inference over the weights. The Bayesian treatment of learning would consider the model weights to be uncertain and representable via a probability distribution that reflects this uncertainty. Adding noise to the weights is a practical, stochastic way to reflect this uncertainty.

Noise applied to the weights can also be interpreted as equivalent (under some assumptions) to a more traditional form of regularization, encouraging stability of the function to be learned. Consider the regression setting, where we wish to train a function ŷ(x) that maps a set of features x to a scalar using the least-squares cost function between the model predictions ŷ(x) and the true values y:

J = \mathbb{E}_{p(\boldsymbol{x}, y)} \left[ \left( \hat{y}(\boldsymbol{x}) - y \right)^2 \right].    (7.30)

The training set consists of m labeled examples {(x^(1), y^(1)), . . . , (x^(m), y^(m))}.

We now assume that with each input presentation we also include a random perturbation ε_W ∼ N(ε; 0, ηI) of the network weights. Let us imagine that we have a standard l-layer MLP. We denote the perturbed model as ŷ_{ε_W}(x). Despite the injection of noise, we are still interested in minimizing the squared error of the output of the network. The objective function thus becomes:

\tilde{J}_W = \mathbb{E}_{p(\boldsymbol{x}, y, \boldsymbol{\epsilon}_W)} \left[ \left( \hat{y}_{\boldsymbol{\epsilon}_W}(\boldsymbol{x}) - y \right)^2 \right]    (7.31)
= \mathbb{E}_{p(\boldsymbol{x}, y, \boldsymbol{\epsilon}_W)} \left[ \hat{y}^2_{\boldsymbol{\epsilon}_W}(\boldsymbol{x}) - 2 y \, \hat{y}_{\boldsymbol{\epsilon}_W}(\boldsymbol{x}) + y^2 \right].    (7.32)

For small η, the minimization of J with added weight noise (with covariance ηI) is equivalent to minimization of J with an additional regularization term: η E_{p(x,y)}[‖∇_W ŷ(x)‖²]. This form of regularization encourages the parameters to go to regions of parameter space where small perturbations of the weights have a relatively small influence on the output. In other words, it pushes the model into regions where the model is relatively insensitive to small variations in the weights, finding points that are not merely minima, but minima surrounded by flat regions (Hochreiter and Schmidhuber, 1995). In the simplified case of linear regression (where, for instance, ŷ(x) = w^⊤x + b), this regularization term collapses into η E_{p(x)}[‖x‖²], which is not a function of parameters and therefore does not contribute to the gradient of J̃_W with respect to the model parameters.
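A minimal NumPy sketch of this weight-noise scheme for a small regression MLP, with made-up data and shapes: on each presentation, a fresh Gaussian perturbation with variance η is added to the weights before the forward pass, the squared error of the perturbed model is differentiated, and the underlying (unperturbed) parameters are updated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))   # made-up regression targets

W1 = 0.5 * rng.normal(size=(3, 16)); b1 = np.zeros(16)
w2 = 0.5 * rng.normal(size=16);      b2 = 0.0
eta, lr = 1e-3, 0.05                          # weight-noise variance and step size
m = len(y)

for step in range(500):
    # Perturb the weights with fresh Gaussian noise on every presentation.
    W1n = W1 + np.sqrt(eta) * rng.normal(size=W1.shape)
    w2n = w2 + np.sqrt(eta) * rng.normal(size=w2.shape)

    # Forward pass and error of the *perturbed* model.
    H = np.tanh(X @ W1n + b1)
    err = H @ w2n + b2 - y

    # Backward pass for the loss 0.5*mean(err^2) through the perturbed model.
    g_out = err / m                           # d(loss)/d(y_hat)
    g_w2 = H.T @ g_out
    g_b2 = g_out.sum()
    g_H = np.outer(g_out, w2n) * (1 - H ** 2) # back through tanh
    g_W1 = X.T @ g_H
    g_b1 = g_H.sum(axis=0)

    # Update the underlying (unperturbed) parameters.
    W1 -= lr * g_W1; b1 -= lr * g_b1
    w2 -= lr * g_w2; b2 -= lr * g_b2
```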
7.5.1 Injecting Noise at the Output Targets

Most datasets have some amount of mistakes in the y labels. It can be harmful to maximize log p(y | x) when y is a mistake. One way to prevent this is to explicitly model the noise on the labels. For example, we can assume that for some small constant ε, the training set label y is correct with probability 1 − ε, and otherwise any of the other possible labels might be correct. This assumption is easy to incorporate into the cost function analytically, rather than by explicitly drawing noise samples. For example, label smoothing regularizes a model based on a softmax with k output values by replacing the hard 0 and 1 classification targets with targets of ε/(k−1) and 1 − ε, respectively. The standard cross-entropy loss may then be used with these soft targets. Maximum likelihood learning with a softmax classifier and hard targets may actually never converge—the softmax can never predict a probability of exactly 0 or exactly 1, so it will continue to learn larger and larger weights, making more extreme predictions forever. It is possible to prevent this scenario using other regularization strategies like weight decay. Label smoothing has the advantage of preventing the pursuit of hard probabilities without discouraging correct classification. This strategy has been used since the 1980s and continues to be featured prominently in modern neural networks (Szegedy et al., 2015).

7.6 Semi-Supervised Learning

In the paradigm of semi-supervised learning, both unlabeled examples from P(x) and labeled examples from P(x, y) are used to estimate P(y | x) or predict y from x.

In the context of deep learning, semi-supervised learning usually refers to learning a representation h = f(x). The goal is to learn a representation so
7.6 Semi-Supervised Learning

In the paradigm of semi-supervised learning, both unlabeled examples from P(x) and labeled examples from P(x, y) are used to estimate P(y | x) or to predict y from x. In the context of deep learning, semi-supervised learning usually refers to learning a representation h = f(x). The goal is to learn a representation so that examples from the same class have similar representations. Unsupervised learning can provide useful cues for how to group examples in representation space: examples that cluster tightly in the input space should be mapped to similar representations. A linear classifier in the new space may achieve better generalization in many cases (Belkin and Niyogi, 2002; Chapelle et al., 2003). A long-standing variant of this approach is the application of principal components analysis as a pre-processing step before applying a classifier (on the projected data).

Instead of having separate unsupervised and supervised components in the model, one can construct models in which a generative model of either P(x) or P(x, y) shares parameters with a discriminative model of P(y | x). One can then trade off the supervised criterion −log P(y | x) with the unsupervised or generative one (such as −log P(x) or −log P(x, y)). The generative criterion then expresses a particular form of prior belief about the solution to the supervised learning problem (Lasserre et al., 2006), namely that the structure of P(x) is connected to the structure of P(y | x) in a way that is captured by the shared parametrization. By controlling how much of the generative criterion is included in the total criterion, one can find a better trade-off than with a purely generative or a purely discriminative training criterion (Lasserre et al., 2006; Larochelle and Bengio, 2008).

Salakhutdinov and Hinton (2008) describe a method for learning the kernel function of a kernel machine used for regression, in which the usage of unlabeled examples for modeling P(x) improves P(y | x) quite significantly.

See Chapelle et al. (2006) for more information about semi-supervised learning.

7.7 Multi-Task Learning

Multi-task learning (Caruana, 1993) is a way to improve generalization by pooling the examples (which can be seen as soft constraints imposed on the parameters) arising out of several tasks. In the same way that additional training examples put more pressure on the parameters of the model towards values that generalize well, when part of a model is shared across tasks, that part of the model is more constrained towards good values (assuming the sharing is justified), often yielding better generalization.

Figure 7.2 illustrates a very common form of multi-task learning, in which different supervised tasks (predicting y^(i) given x) share the same input x, as well as some intermediate-level representation h^(shared) capturing a common pool of
factors. The model can generally be divided into two kinds of parts and associated parameters:

1. Task-specific parameters (which only benefit from the examples of their task to achieve good generalization). These are the upper layers of the neural network in figure 7.2.

2. Generic parameters, shared across all the tasks (which benefit from the pooled data of all the tasks). These are the lower layers of the neural network in figure 7.2.

Figure 7.2: Multi-task learning can be cast in several ways in deep learning frameworks, and this figure illustrates the common situation where the tasks share a common input but involve different target random variables. The lower layers of a deep network (whether it is supervised and feedforward or includes a generative component with downward arrows) can be shared across such tasks, while task-specific parameters (associated respectively with the weights into and from h^(1) and h^(2)) can be learned on top of a shared representation h^(shared). The underlying assumption is that there exists a common pool of factors that explain the variations in the input x, while each task is associated with a subset of these factors. In this example, it is additionally assumed that the top-level hidden units h^(1) and h^(2) are specialized to each task (respectively predicting y^(1) and y^(2)) while some intermediate-level representation h^(shared) is shared across all tasks. In the unsupervised learning context, it makes sense for some of the top-level factors to be associated with none of the output tasks (h^(3)): these are the factors that explain some of the input variations but are not relevant for predicting y^(1) or y^(2).
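As a rough illustration of this division into shared and task-specific parameters, a forward pass of a two-task network might look like the sketch below. The layer sizes, weight initialization, and function names are all assumptions made for illustration, not a prescribed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input x, shared representation h_shared, two task heads.
n_in, n_shared, n_h = 10, 16, 8

W_shared = rng.normal(scale=0.1, size=(n_in, n_shared))      # generic parameters
W_task = {t: rng.normal(scale=0.1, size=(n_shared, n_h)) for t in (1, 2)}
W_out = {t: rng.normal(scale=0.1, size=n_h) for t in (1, 2)}  # task-specific

def relu(z):
    return np.maximum(z, 0.0)

def predict(x, task):
    """Both tasks reuse W_shared; only W_task and W_out differ per task."""
    h_shared = relu(x @ W_shared)            # h^(shared), trained on pooled data
    h_task = relu(h_shared @ W_task[task])   # task-specific layer (h^(1) or h^(2))
    return h_task @ W_out[task]              # scalar prediction for y^(task)

x = rng.normal(size=n_in)
print(predict(x, task=1), predict(x, task=2))
```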
Improved generalization and generalization error bounds (Baxter, 1995) can be achieved because of the shared parameters, for which statistical strength can be greatly improved (in proportion with the increased number of examples for the shared parameters, compared to the scenario of single-task models). Of course this will happen only if some assumptions about the statistical relationship between the different tasks are valid, meaning that there is something shared across some of the tasks.

From the point of view of deep learning, the underlying prior belief is the following: among the factors that explain the variations observed in the data associated with the different tasks, some are shared across two or more tasks.

Figure 7.3: Learning curves showing how the negative log-likelihood loss changes over time (indicated as number of training iterations over the dataset, or epochs). In this example, we train a maxout network on MNIST. Observe that the training objective decreases consistently over time, but the validation set average loss eventually begins to increase again, forming an asymmetric U-shaped curve.

7.8 Early Stopping

When training large models with sufficient representational capacity to overfit the task, we often observe that training error decreases steadily over time, but validation set error begins to rise again. See figure 7.3 for an example of this behavior. This behavior occurs very reliably.

This means we can obtain a model with better validation set error (and thus, hopefully, better test set error) by returning to the parameter setting at the point in time with the lowest validation set error. Every time the error on the validation set improves, we store a copy of the model parameters. When the training algorithm terminates, we return these parameters, rather than the latest parameters. The
algorithm terminates when no parameters have improved over the best recorded validation error for some pre-specified number of iterations. This procedure is specified more formally in algorithm 7.1.

Algorithm 7.1 The early stopping meta-algorithm for determining the best amount of time to train. This meta-algorithm is a general strategy that works well with a variety of training algorithms and ways of quantifying error on the validation set.

Let n be the number of steps between evaluations.
Let p be the "patience," the number of times to observe worsening validation set error before giving up.
Let θ_o be the initial parameters.
θ ← θ_o
i ← 0
j ← 0
v ← ∞
θ* ← θ
i* ← i
while j < p do
    Update θ by running the training algorithm for n steps.
    i ← i + n
    v′ ← ValidationSetError(θ)
    if v′ < v then
        j ← 0
        θ* ← θ
        i* ← i
        v ← v′
    else
        j ← j + 1
    end if
end while
Best parameters are θ*, best number of training steps is i*.

This strategy is known as early stopping. It is probably the most commonly used form of regularization in deep learning. Its popularity is due both to its effectiveness and its simplicity.
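A compact Python rendering of algorithm 7.1 might look as follows. The model interface (train_steps, validation_error, get_params) is hypothetical and would need to be adapted to whatever training loop is actually in use; this is a sketch of the meta-algorithm, not a library function.

```python
import copy

def early_stopping(model, n=100, patience=5):
    """Sketch of algorithm 7.1.  `model` is assumed (hypothetically) to expose
    train_steps(k), validation_error(), and get_params()."""
    best_params = copy.deepcopy(model.get_params())
    best_steps, steps, fails, best_err = 0, 0, 0, float("inf")
    while fails < patience:
        model.train_steps(n)          # update theta by running training for n steps
        steps += n
        err = model.validation_error()
        if err < best_err:            # validation error improved: reset patience
            fails, best_err, best_steps = 0, err, steps
            best_params = copy.deepcopy(model.get_params())
        else:
            fails += 1
    return best_params, best_steps
```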
One way to think of early stopping is as a very efficient hyperparameter selection algorithm. In this view, the number of training steps is just another hyperparameter. We can see in figure 7.3 that this hyperparameter has a U-shaped validation set performance curve. Most hyperparameters that control model capacity have such a U-shaped validation set performance curve, as illustrated in figure 5.3. In the case of early stopping, we are controlling the effective capacity of the model by determining how many steps it can take to fit the training set. Most hyperparameters must be chosen using an expensive guess-and-check process, where we set a hyperparameter at the start of training, then run training for several steps to see its effect. The "training time" hyperparameter is unique in that, by definition, a single run of training tries out many values of the hyperparameter. The only significant cost of choosing this hyperparameter automatically via early stopping is running the validation set evaluation periodically during training. Ideally, this is done in parallel to the training process, on a separate machine, separate CPU, or separate GPU from the main training process. If such resources are not available, then the cost of these periodic evaluations may be reduced by using a validation set that is small compared to the training set or by evaluating the validation set error less frequently and obtaining a lower-resolution estimate of the optimal training time.

An additional cost of early stopping is the need to maintain a copy of the best parameters. This cost is generally negligible, because it is acceptable to store these parameters in a slower and larger form of memory (for example, training in GPU memory, but storing the optimal parameters in host memory or on a disk drive). Since the best parameters are written to infrequently and never read during training, these occasional slow writes have little effect on the total training time.

Early stopping is a very unobtrusive form of regularization, in that it requires almost no change in the underlying training procedure, the objective function, or the set of allowable parameter values. This means that it is easy to use early stopping without damaging the learning dynamics. This is in contrast to weight decay, where one must be careful not to use too much weight decay and trap the network in a bad local minimum corresponding to a solution with pathologically small weights.

Early stopping may be used either alone or in conjunction with other regularization strategies. Even when using regularization strategies that modify the objective function to encourage better generalization, it is rare for the best generalization to occur at a local minimum of the training objective.

Early stopping requires a validation set, which means some training data is not fed to the model. To best exploit this extra data, one can perform extra training after the initial training with early stopping has completed. In the second, extra training step, all of the training data is included. There are two basic strategies one can use for this second training procedure.

One strategy (algorithm 7.2) is to initialize the model again and retrain on all
of the data. In this second training pass, we train for the same number of steps as the early stopping procedure determined was optimal in the first pass. There are some subtleties associated with this procedure. For example, there is not a good way of knowing whether to retrain for the same number of parameter updates or the same number of passes through the dataset. On the second round of training, each pass through the dataset will require more parameter updates because the training set is bigger.

Algorithm 7.2 A meta-algorithm for using early stopping to determine how long to train, then retraining on all the data.

Let X^(train) and y^(train) be the training set.
Split X^(train) and y^(train) into (X^(subtrain), X^(valid)) and (y^(subtrain), y^(valid)) respectively.
Run early stopping (algorithm 7.1) starting from random θ, using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This returns i*, the optimal number of steps.
Set θ to random values again.
Train on X^(train) and y^(train) for i* steps.

Another strategy for using all of the data is to keep the parameters obtained from the first round of training and then continue training, but now using all of the data. At this stage, we no longer have a guide for when to stop in terms of a number of steps. Instead, we can monitor the average loss function on the validation set and continue training until it falls below the value of the training set objective at which the early stopping procedure halted. This strategy avoids the high cost of retraining the model from scratch, but is not as well behaved. For example, there is no guarantee that the objective on the validation set will ever reach the target value, so this strategy is not even guaranteed to terminate. This procedure is presented more formally in algorithm 7.3.

Early stopping is also useful because it reduces the computational cost of the training procedure. Besides the obvious reduction in cost due to limiting the number of training iterations, it also has the benefit of providing regularization without requiring the addition of penalty terms to the cost function or the computation of the gradients of such additional terms.

How early stopping acts as a regularizer: So far we have stated that early stopping is a regularization strategy, but we have supported this claim only by showing learning curves where the validation set error has a U-shaped curve. What
is the actual mechanism by which early stopping regularizes the model? Bishop (1995a) and Sjöberg and Ljung (1995) argued that early stopping has the effect of restricting the optimization procedure to a relatively small volume of parameter space in the neighborhood of the initial parameter value θ_o, as illustrated in figure 7.4. More specifically, imagine taking τ optimization steps (corresponding to τ training iterations) with learning rate ε. We can view the product ετ as a measure of effective capacity. Assuming the gradient is bounded, restricting both the number of iterations and the learning rate limits the volume of parameter space reachable from θ_o. In this sense, ετ behaves as if it were the reciprocal of the coefficient used for weight decay.

Algorithm 7.3 Meta-algorithm using early stopping to determine at what objective value we start to overfit, then continuing training until that value is reached.

Let X^(train) and y^(train) be the training set.
Split X^(train) and y^(train) into (X^(subtrain), X^(valid)) and (y^(subtrain), y^(valid)) respectively.
Run early stopping (algorithm 7.1) starting from random θ, using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This updates θ.
ε ← J(θ, X^(subtrain), y^(subtrain))
while J(θ, X^(valid), y^(valid)) > ε do
    Train on X^(train) and y^(train) for n steps.
end while

Indeed, we can show how—in the case of a simple linear model with a quadratic error function and simple gradient descent—early stopping is equivalent to L2 regularization.

In order to compare with classical L2 regularization, we examine a simple setting where the only parameters are linear weights (θ = w). We can model the cost function J with a quadratic approximation in the neighborhood of the empirically optimal value of the weights w*:

Ĵ(θ) = J(w*) + (1/2)(w − w*)⊤ H (w − w*),   (7.33)

where H is the Hessian matrix of J with respect to w, evaluated at w*. Given the assumption that w* is a minimum of J(w), we know that H is positive semidefinite. Under a local Taylor series approximation, the gradient is given by

∇_w Ĵ(w) = H (w − w*).   (7.34)
Figure 7.4: An illustration of the effect of early stopping. (Left) The solid contour lines indicate the contours of the negative log-likelihood. The dashed line indicates the trajectory taken by SGD beginning from the origin. Rather than stopping at the point w* that minimizes the cost, early stopping results in the trajectory stopping at an earlier point w̃. (Right) An illustration of the effect of L2 regularization for comparison. The dashed circles indicate the contours of the L2 penalty, which causes the minimum of the total cost to lie nearer the origin than the minimum of the unregularized cost.

We are going to study the trajectory followed by the parameter vector during training. For simplicity, let us set the initial parameter vector to the origin, that is, w^(0) = 0. (For neural networks, to obtain symmetry breaking between hidden units, we cannot initialize all the parameters to 0, as discussed in section 6.2. However, the argument holds for any other initial value w^(0).) Let us study the approximate behavior of gradient descent on J by analyzing gradient descent on Ĵ:

w^(τ) = w^(τ−1) − ε ∇_w Ĵ(w^(τ−1))   (7.35)
      = w^(τ−1) − ε H (w^(τ−1) − w*)   (7.36)
w^(τ) − w* = (I − εH)(w^(τ−1) − w*).   (7.37)

Let us now rewrite this expression in the space of the eigenvectors of H, exploiting the eigendecomposition of H: H = QΛQ⊤, where Λ is a diagonal matrix and Q is an orthonormal basis of eigenvectors.

w^(τ) − w* = (I − εQΛQ⊤)(w^(τ−1) − w*)   (7.38)
Q⊤(w^(τ) − w*) = (I − εΛ) Q⊤(w^(τ−1) − w*)   (7.39)
Assuming that w^(0) = 0 and that ε is chosen to be small enough to guarantee |1 − ελ_i| < 1, the parameter trajectory during training after τ parameter updates is as follows:

Q⊤ w^(τ) = [I − (I − εΛ)^τ] Q⊤ w*.   (7.40)

Now, the expression for Q⊤ w̃ in equation 7.13 for L2 regularization can be rearranged as:

Q⊤ w̃ = (Λ + αI)^{−1} Λ Q⊤ w*   (7.41)
Q⊤ w̃ = [I − (Λ + αI)^{−1} α] Q⊤ w*.   (7.42)

Comparing equation 7.40 and equation 7.42, we see that if the hyperparameters ε, α, and τ are chosen such that

(I − εΛ)^τ = (Λ + αI)^{−1} α,   (7.43)

then L2 regularization and early stopping can be seen to be equivalent (at least under the quadratic approximation of the objective function). Going even further, by taking logarithms and using the series expansion for log(1 + x), we can conclude that if all λ_i are small (that is, ελ_i ≪ 1 and λ_i/α ≪ 1), then

τ ≈ 1/(εα),   (7.44)
α ≈ 1/(τε).   (7.45)

That is, under these assumptions, the number of training iterations τ plays a role inversely proportional to the L2 regularization parameter, and the inverse of ετ plays the role of the weight decay coefficient.

Parameter values corresponding to directions of significant curvature (of the objective function) are regularized less than directions of less curvature. Of course, in the context of early stopping, this really means that parameters that correspond to directions of significant curvature tend to learn early relative to parameters corresponding to directions of less curvature.

The derivations in this section have shown that a trajectory of length τ ends at a point that corresponds to a minimum of the L2-regularized objective. Early stopping is of course more than the mere restriction of the trajectory length; instead, early stopping typically involves monitoring the validation set error in order to stop the trajectory at a particularly good point in space. Early stopping therefore has the advantage over weight decay that early stopping automatically determines the correct amount of regularization, while weight decay requires many training experiments with different values of its hyperparameter.
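The correspondence derived above can be checked numerically on a toy quadratic. The sketch below (values chosen arbitrarily) uses a diagonal Hessian so that each coordinate is its own eigendirection; it verifies equation 7.40 exactly and shows that the L2 solution with α ≈ 1/(τε) is close when the eigenvalues are small.

```python
import numpy as np

# Toy quadratic with a diagonal Hessian, so each coordinate is an eigendirection.
lam = np.array([0.05, 0.1, 0.2, 0.3])     # eigenvalues of H (all small)
w_star = np.array([1.0, -2.0, 0.5, 3.0])
eps, tau = 1e-2, 100                       # learning rate and number of steps
alpha = 1.0 / (tau * eps)                  # weight decay coefficient from eq. 7.45

# Early stopping: tau gradient steps from w = 0 on J = 1/2 sum lam_i (w_i - w*_i)^2.
w = np.zeros_like(w_star)
for _ in range(tau):
    w -= eps * lam * (w - w_star)

# Closed forms: eq. 7.40 for the trajectory, and the L2-regularized minimizer.
w_traj = (1.0 - (1.0 - eps * lam) ** tau) * w_star   # matches the loop exactly
w_l2 = lam / (lam + alpha) * w_star                  # approximately equal for small lam

print(np.round(w, 4))
print(np.round(w_traj, 4))
print(np.round(w_l2, 4))
```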
7.9 Parameter Tying and Parameter Sharing

Thus far, in this chapter, when we have discussed adding constraints or penalties to the parameters, we have always done so with respect to a fixed region or point. For example, L2 regularization (or weight decay) penalizes model parameters for deviating from the fixed value of zero. However, sometimes we may need other ways to express our prior knowledge about suitable values of the model parameters. Sometimes we might not know precisely what values the parameters should take, but we know, from knowledge of the domain and model architecture, that there should be some dependencies between the model parameters.

A common type of dependency that we often want to express is that certain parameters should be close to one another. Consider the following scenario: we have two models performing the same classification task (with the same set of classes) but with somewhat different input distributions. Formally, we have model A with parameters w^(A) and model B with parameters w^(B). The two models map the input to two different, but related, outputs: ŷ^(A) = f(w^(A), x) and ŷ^(B) = g(w^(B), x).

Let us imagine that the tasks are similar enough (perhaps with similar input and output distributions) that we believe the model parameters should be close to each other: for all i, w_i^(A) should be close to w_i^(B). We can leverage this information through regularization. Specifically, we can use a parameter norm penalty of the form Ω(w^(A), w^(B)) = ||w^(A) − w^(B)||_2^2. Here we used an L2 penalty, but other choices are also possible.

This kind of approach was proposed by Lasserre et al. (2006), who regularized the parameters of one model, trained as a classifier in a supervised paradigm, to be close to the parameters of another model, trained in an unsupervised paradigm (to capture the distribution of the observed input data). The architectures were constructed such that many of the parameters in the classifier model could be paired to corresponding parameters in the unsupervised model.

While a parameter norm penalty is one way to regularize parameters to be close to one another, the more popular way is to use constraints: to force sets of parameters to be equal. This method of regularization is often referred to as parameter sharing, because we interpret the various models or model components as sharing a unique set of parameters. A significant advantage of parameter sharing over regularizing the parameters to be close (via a norm penalty) is that only a subset of the parameters (the unique set) needs to be stored in memory. In certain models—such as the convolutional neural network—this can lead to significant reduction in the memory footprint of the model.
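The norm-penalty form of parameter tying described above is simple to write down; a small sketch follows (the weight values and the coefficient λ are illustrative only). In practice the gradients below would be added to each model's own loss gradient during training.

```python
import numpy as np

def tying_penalty(w_a, w_b, lam=0.1):
    """Squared L2 penalty encouraging model A's and model B's weights to stay
    close: lam * ||w_a - w_b||_2^2, as in the penalty Omega above."""
    diff = w_a - w_b
    return lam * np.dot(diff, diff)

def tying_penalty_grads(w_a, w_b, lam=0.1):
    """Gradients of the penalty with respect to each weight vector."""
    g = 2.0 * lam * (w_a - w_b)
    return g, -g

w_a = np.array([0.5, -1.0, 2.0])
w_b = np.array([0.4, -0.8, 2.2])
print(tying_penalty(w_a, w_b), tying_penalty_grads(w_a, w_b))
```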
Convolutional Neural Networks By far the most popular and extensive use of parameter sharing occurs in convolutional neural networks (CNNs) applied to computer vision. Natural images have many statistical properties that are invariant to translation. For example, a photo of a cat remains a photo of a cat if it is translated one pixel to the right. CNNs take this property into account by sharing parameters across multiple image locations. The same feature (a hidden unit with the same weights) is computed over different locations in the input. This means that we can find a cat with the same cat detector whether the cat appears at column i or column i + 1 in the image.

Parameter sharing has allowed CNNs to dramatically lower the number of unique model parameters and to significantly increase network sizes without requiring a corresponding increase in training data. It remains one of the best examples of how to effectively incorporate domain knowledge into the network architecture. CNNs will be discussed in more detail in chapter 9.

7.10 Sparse Representations

Weight decay acts by placing a penalty directly on the model parameters. Another strategy is to place a penalty on the activations of the units in a neural network, encouraging their activations to be sparse. This indirectly imposes a complicated penalty on the model parameters.

We have already discussed (in section 7.1.2) how L1 penalization induces a sparse parametrization—meaning that many of the parameters become zero (or close to zero). Representational sparsity, on the other hand, describes a representation where many of the elements of the representation are zero (or close to zero). A simplified view of this distinction can be illustrated in the context of linear regression:

y = A x, with y ∈ R^m, A ∈ R^{m×n}, x ∈ R^n:
y = [18, 5, 15, −9, −3]⊤,
A = [[ 4, 0, 0, −2, 0, 0 ],
     [ 0, 0, −1, 0, 3, 0 ],
     [ 0, 5, 0, 0, 0, 0 ],
     [ 1, 0, 0, −1, 0, −4 ],
     [ 1, 0, 0, 0, −5, 0 ]],
x = [2, 3, −2, −5, 1, 4]⊤.   (7.46)
y = B h, with y ∈ R^m, B ∈ R^{m×n}, h ∈ R^n:
y = [−14, 1, 19, 2, 23]⊤,
B = [[ 3, −1, 2, −5, 4, 1 ],
     [ 4, 2, −3, −1, 1, 3 ],
     [ −1, 5, 4, 2, −3, −2 ],
     [ 3, 1, 2, −3, 0, −3 ],
     [ −5, 4, −2, 2, −5, −1 ]],
h = [0, 2, 0, 0, −3, 0]⊤.   (7.47)

In the first expression (equation 7.46), we have an example of a sparsely parametrized linear regression model. In the second, we have linear regression with a sparse representation h of the data x. That is, h is a function of x that, in some sense, represents the information present in x, but does so with a sparse vector.

Representational regularization is accomplished by the same sorts of mechanisms that we have used in parameter regularization. Norm penalty regularization of representations is performed by adding to the loss function J a norm penalty on the representation. This penalty is denoted Ω(h). As before, we denote the regularized loss function by J̃:

J̃(θ; X, y) = J(θ; X, y) + αΩ(h),   (7.48)

where α ∈ [0, ∞) weights the relative contribution of the norm penalty term, with larger values of α corresponding to more regularization.

Just as an L1 penalty on the parameters induces parameter sparsity, an L1 penalty on the elements of the representation induces representational sparsity: Ω(h) = ||h||_1 = Σ_i |h_i|. Of course, the L1 penalty is only one choice of penalty that can result in a sparse representation. Others include the penalty derived from a Student-t prior on the representation (Olshausen and Field, 1996; Bergstra, 2011) and KL divergence penalties (Larochelle and Bengio, 2008) that are especially useful for representations with elements constrained to lie on the unit interval. Lee et al. (2008) and Goodfellow et al. (2009) both provide examples of strategies based on regularizing the average activation across several examples, (1/m) Σ_i h^(i), to be near some target value, such as a vector with .01 for each entry.

Other approaches obtain representational sparsity with a hard constraint on the activation values. For example, orthogonal matching pursuit (Pati et al., 1993) encodes an input x with the representation h that solves the constrained optimization problem

arg min_{h, ||h||_0 < k} ||x − W h||^2,   (7.49)

where ||h||_0 is the number of non-zero entries of h. This problem can be solved efficiently when W is constrained to be orthogonal.
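To illustrate the L1 representational penalty Ω(h) = ||h||_1 discussed above, here is a minimal sketch of the penalty and its subgradient for a minibatch of hidden activations. The array shapes, averaging over the batch, and the value of α are assumptions made for the example.

```python
import numpy as np

def l1_representation_penalty(H, alpha=1e-3):
    """Sparsity penalty alpha * Omega(h) with Omega(h) = ||h||_1, averaged over a
    minibatch H of hidden activations (shape: batch x units)."""
    return alpha * np.abs(H).mean(axis=0).sum()

def l1_penalty_grad(H, alpha=1e-3):
    """Subgradient of the penalty with respect to the activations; during training
    this term is added to dJ/dH before back-propagating further."""
    return alpha * np.sign(H) / H.shape[0]

H = np.array([[0.0, 1.5, -0.2], [0.3, 0.0, 2.0]])
print(l1_representation_penalty(H), l1_penalty_grad(H))
```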
This method is often called OMP-k, with the value of k specified to indicate the number of non-zero features allowed. Coates and Ng (2011) demonstrated that OMP-1 can be a very effective feature extractor for deep architectures.

Essentially any model that has hidden units can be made sparse. Throughout this book, we will see many examples of sparsity regularization used in a variety of contexts.

7.11 Bagging and Other Ensemble Methods

Bagging (short for bootstrap aggregating) is a technique for reducing generalization error by combining several models (Breiman, 1994). The idea is to train several different models separately, then have all of the models vote on the output for test examples. This is an example of a general strategy in machine learning called model averaging. Techniques employing this strategy are known as ensemble methods.

The reason that model averaging works is that different models will usually not make all the same errors on the test set.

Consider for example a set of k regression models. Suppose that each model makes an error ε_i on each example, with the errors drawn from a zero-mean multivariate normal distribution with variances E[ε_i^2] = v and covariances E[ε_i ε_j] = c. Then the error made by the average prediction of all the ensemble models is (1/k) Σ_i ε_i. The expected squared error of the ensemble predictor is

E[ ((1/k) Σ_i ε_i)^2 ] = (1/k^2) E[ Σ_i ( ε_i^2 + Σ_{j≠i} ε_i ε_j ) ]   (7.50)
                       = (1/k) v + ((k − 1)/k) c.   (7.51)

In the case where the errors are perfectly correlated and c = v, the mean squared error reduces to v, so the model averaging does not help at all. In the case where the errors are perfectly uncorrelated and c = 0, the expected squared error of the ensemble is only (1/k) v: the expected squared error of the ensemble decreases in inverse proportion to the ensemble size. In other words, on average, the ensemble will perform at least as well as any of its members, and if the members make independent errors, the ensemble will perform significantly better than its members.
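Equation 7.51 can also be checked by simulation. The sketch below draws correlated errors and compares the Monte Carlo estimate of the ensemble's squared error with (1/k)v + ((k − 1)/k)c; the values of v and c are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
v, c = 1.0, 0.25          # error variance and pairwise covariance (arbitrary values)

def ensemble_mse(k, n_trials=200_000):
    """Monte Carlo estimate of the expected squared error of the average of k
    correlated model errors (compare equation 7.51)."""
    cov = np.full((k, k), c) + (v - c) * np.eye(k)
    errs = rng.multivariate_normal(np.zeros(k), cov, size=n_trials)
    return np.mean(errs.mean(axis=1) ** 2)

for k in (1, 2, 5, 10):
    predicted = v / k + (k - 1) / k * c      # right-hand side of equation 7.51
    print(k, round(ensemble_mse(k), 4), round(predicted, 4))
```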
Different ensemble methods construct the ensemble of models in different ways. For example, each member of the ensemble could be formed by training a completely different kind of model using a different algorithm or objective function. Bagging is a method that allows the same kind of model, training algorithm and objective function to be reused several times.

Specifically, bagging involves constructing k different datasets. Each dataset has the same number of examples as the original dataset, but each dataset is constructed by sampling with replacement from the original dataset. This means that, with high probability, each dataset is missing some of the examples from the original dataset and also contains several duplicate examples (on average around 2/3 of the examples from the original dataset are found in the resulting training set, if it has the same size as the original). Model i is then trained on dataset i. The differences between which examples are included in each dataset result in differences between the trained models. See figure 7.5 for an example.

Figure 7.5: A cartoon depiction of how bagging works. Suppose we train an 8 detector on a dataset containing an 8, a 6 and a 9, and that we make two different resampled datasets. The bagging training procedure is to construct each of these datasets by sampling with replacement. The first dataset omits the 9 and repeats the 8. On this dataset, the detector learns that a loop on top of the digit corresponds to an 8. On the second dataset, we repeat the 9 and omit the 6. In this case, the detector learns that a loop on the bottom of the digit corresponds to an 8. Each of these individual classification rules is brittle, but if we average their output then the detector is robust, achieving maximal confidence only when both loops of the 8 are present.
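Constructing the k bootstrap datasets is straightforward; a minimal sketch with hypothetical data follows. Each resampled dataset has the same size as the original, so duplicates and omissions are visible in the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def bagged_datasets(X, y, k):
    """Construct k bootstrap datasets, each the same size as the original and
    sampled with replacement (so each omits roughly 1/3 of the examples)."""
    m = len(X)
    for _ in range(k):
        idx = rng.integers(0, m, size=m)   # sample m indices with replacement
        yield X[idx], y[idx]

X = np.arange(10).reshape(10, 1)
y = np.arange(10)
for Xi, yi in bagged_datasets(X, y, k=2):
    print(sorted(yi))
```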
Neural networks reach a wide enough variety of solution points that they can often benefit from model averaging even if all of the models are trained on the same dataset. Differences in random initialization, random selection of minibatches, differences in hyperparameters, or different outcomes of non-deterministic implementations of neural networks are often enough to cause different members of the ensemble to make partially independent errors.

Model averaging is an extremely powerful and reliable method for reducing generalization error. Its use is usually discouraged when benchmarking algorithms for scientific papers, because any machine learning algorithm can benefit substantially from model averaging at the price of increased computation and memory. For this reason, benchmark comparisons are usually made using a single model.

Machine learning contests are usually won by methods using model averaging over dozens of models. A recent prominent example is the Netflix Grand Prize (Koren, 2009).

Not all techniques for constructing ensembles are designed to make the ensemble more regularized than the individual models. For example, a technique called boosting (Freund and Schapire, 1996b,a) constructs an ensemble with higher capacity than the individual models. Boosting has been applied to build ensembles of neural networks (Schwenk and Bengio, 1998) by incrementally adding neural networks to the ensemble. Boosting has also been applied interpreting an individual neural network as an ensemble (Bengio et al., 2006a), incrementally adding hidden units to the neural network.

7.12 Dropout

Dropout (Srivastava et al., 2014) provides a computationally inexpensive but powerful method of regularizing a broad family of models. To a first approximation, dropout can be thought of as a method of making bagging practical for ensembles of very many large neural networks. Bagging involves training multiple models and evaluating multiple models on each test example. This seems impractical when each model is a large neural network, since training and evaluating such networks is costly in terms of runtime and memory. It is common to use ensembles of five to ten neural networks—Szegedy et al. (2014a) used six to win the ILSVRC—but more than this rapidly becomes unwieldy. Dropout provides an inexpensive approximation to training and evaluating a bagged ensemble of exponentially many neural networks.

Specifically, dropout trains the ensemble consisting of all sub-networks that can be formed by removing non-output units from an underlying base network, as illustrated in figure 7.6. In most modern neural networks, based on a series of affine transformations and nonlinearities, we can effectively remove a unit from a network by multiplying its output value by zero. This procedure requires some slight modification for models such as radial basis function networks, which take
the difference between the unit's state and some reference value. Here, we present the dropout algorithm in terms of multiplication by zero for simplicity, but it can be trivially modified to work with other operations that remove a unit from the network.

Recall that to learn with bagging, we define k different models, construct k different datasets by sampling from the training set with replacement, and then train model i on dataset i. Dropout aims to approximate this process, but with an exponentially large number of neural networks. Specifically, to train with dropout, we use a minibatch-based learning algorithm that makes small steps, such as stochastic gradient descent. Each time we load an example into a minibatch, we randomly sample a different binary mask to apply to all of the input and hidden units in the network. The mask for each unit is sampled independently from all of the others. The probability of sampling a mask value of one (causing a unit to be included) is a hyperparameter fixed before training begins. It is not a function of the current value of the model parameters or the input example. Typically, an input unit is included with probability 0.8 and a hidden unit is included with probability 0.5. We then run forward propagation, back-propagation, and the learning update as usual. Figure 7.7 illustrates how to run forward propagation with dropout.

More formally, suppose that a mask vector µ specifies which units to include, and J(θ, µ) defines the cost of the model defined by parameters θ and mask µ. Then dropout training consists in minimizing E_µ J(θ, µ). The expectation contains exponentially many terms, but we can obtain an unbiased estimate of its gradient by sampling values of µ.

Dropout training is not quite the same as bagging training. In the case of bagging, the models are all independent. In the case of dropout, the models share parameters, with each model inheriting a different subset of parameters from the parent neural network. This parameter sharing makes it possible to represent an exponential number of models with a tractable amount of memory. In the case of bagging, each model is trained to convergence on its respective training set. In the case of dropout, typically most models are not explicitly trained at all—usually, the model is large enough that it would be infeasible to sample all possible sub-networks within the lifetime of the universe. Instead, a tiny fraction of the possible sub-networks are each trained for a single step, and the parameter sharing causes the remaining sub-networks to arrive at good settings of the parameters. These are the only differences. Beyond these, dropout follows the bagging algorithm. For example, the training set encountered by each sub-network is indeed a subset of the original training set sampled with replacement.
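A minimal NumPy sketch of one stochastic forward pass with dropout masks, using the inclusion probabilities mentioned above, might look as follows; the tiny two-layer network and its weights are placeholders chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def dropout_forward(x, W1, b1, W2, b2, p_input=0.8, p_hidden=0.5):
    """One stochastic forward pass: sample an independent binary mask for each
    input and hidden unit, multiply each unit's state by its mask entry, and
    continue forward propagation as usual (compare figure 7.7)."""
    mu_x = rng.random(x.shape) < p_input        # keep input units with prob 0.8
    h = relu(W1 @ (x * mu_x) + b1)
    mu_h = rng.random(h.shape) < p_hidden       # keep hidden units with prob 0.5
    return W2 @ (h * mu_h) + b2

x = rng.normal(size=4)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)
print(dropout_forward(x, W1, b1, W2, b2))       # a different sub-network each call
```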
Figure 7.6: Dropout trains an ensemble consisting of all sub-networks that can be constructed by removing non-output units from an underlying base network. Here, we begin with a base network with two visible units and two hidden units. There are sixteen possible subsets of these four units. We show all sixteen subnetworks that may be formed by dropping out different subsets of units from the original network. In this small example, a large proportion of the resulting networks have no input units or no path connecting the input to the output. This problem becomes insignificant for networks with wider layers, where the probability of dropping all possible paths from inputs to outputs becomes smaller.
Figure 7.7: An example of forward propagation through a feedforward network using dropout. (Top) In this example, we use a feedforward network with two input units, one hidden layer with two hidden units, and one output unit. (Bottom) To perform forward propagation with dropout, we randomly sample a vector µ with one entry for each input or hidden unit in the network. The entries of µ are binary and are sampled independently from each other. The probability of each entry being 1 is a hyperparameter, usually 0.5 for the hidden layers and 0.8 for the input. Each unit in the network is multiplied by the corresponding mask, and then forward propagation continues through the rest of the network as usual. This is equivalent to randomly selecting one of the sub-networks from figure 7.6 and running forward propagation through it.
To make a prediction, a bagged ensemble must accumulate votes from all of its members. We refer to this process as inference in this context. So far, our description of bagging and dropout has not required that the model be explicitly probabilistic. Now, we assume that the model's role is to output a probability distribution. In the case of bagging, each model i produces a probability distribution p^(i)(y | x). The prediction of the ensemble is given by the arithmetic mean of all of these distributions,

(1/k) Σ_{i=1}^{k} p^(i)(y | x).   (7.52)

In the case of dropout, each sub-model defined by mask vector µ defines a probability distribution p(y | x, µ). The arithmetic mean over all masks is given by

Σ_µ p(µ) p(y | x, µ),   (7.53)

where p(µ) is the probability distribution that was used to sample µ at training time.

Because this sum includes an exponential number of terms, it is intractable to evaluate except in cases where the structure of the model permits some form of simplification. So far, deep neural nets are not known to permit any tractable simplification. Instead, we can approximate the inference with sampling, by averaging together the output from many masks. Even 10–20 masks are often sufficient to obtain good performance.

However, there is an even better approach that allows us to obtain a good approximation to the predictions of the entire ensemble, at the cost of only one forward propagation. To do so, we change to using the geometric mean rather than the arithmetic mean of the ensemble members' predicted distributions. Warde-Farley et al. (2014) present arguments and empirical evidence that the geometric mean performs comparably to the arithmetic mean in this context.

The geometric mean of multiple probability distributions is not guaranteed to be a probability distribution. To guarantee that the result is a probability distribution, we impose the requirement that none of the sub-models assigns probability 0 to any event, and we renormalize the resulting distribution. The unnormalized probability distribution defined directly by the geometric mean is given by

p̃_ensemble(y | x) = ( Π_µ p(y | x, µ) )^{1/2^d},   (7.54)

where d is the number of units that may be dropped. Here we use a uniform distribution over µ to simplify the presentation, but non-uniform distributions are
also possible. To make predictions we must re-normalize the ensemble:

p_ensemble(y | x) = p̃_ensemble(y | x) / Σ_{y′} p̃_ensemble(y′ | x).   (7.55)

A key insight (Hinton et al., 2012c) involved in dropout is that we can approximate p_ensemble by evaluating p(y | x) in one model: the model with all units, but with the weights going out of unit i multiplied by the probability of including unit i. The motivation for this modification is to capture the right expected value of the output from that unit. We call this approach the weight scaling inference rule. There is not yet any theoretical argument for the accuracy of this approximate inference rule in deep nonlinear networks, but empirically it performs very well.

Because we usually use an inclusion probability of 1/2, the weight scaling rule usually amounts to dividing the weights by 2 at the end of training, and then using the model as usual. Another way to achieve the same result is to multiply the states of the units by 2 during training. Either way, the goal is to make sure that the expected total input to a unit at test time is roughly the same as the expected total input to that unit at train time, even though half the units at train time are missing on average.

For many classes of models that do not have nonlinear hidden units, the weight scaling inference rule is exact. For a simple example, consider a softmax regression classifier with n input variables represented by the vector v:

P(y = y | v) = softmax(W⊤ v + b)_y.   (7.56)

We can index into the family of sub-models by element-wise multiplication of the input with a binary vector d:

P(y = y | v; d) = softmax(W⊤ (d ⊙ v) + b)_y.   (7.57)

The ensemble predictor is defined by re-normalizing the geometric mean over all ensemble members' predictions:

P_ensemble(y = y | v) = P̃_ensemble(y = y | v) / Σ_{y′} P̃_ensemble(y = y′ | v),   (7.58)

where

P̃_ensemble(y = y | v) = ( Π_{d∈{0,1}^n} P(y = y | v; d) )^{1/2^n}.   (7.59)
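Before the algebraic argument that follows, the exactness claim can be checked numerically for a small softmax regression model. The sketch below (with arbitrary weights) compares the renormalized geometric mean over all 2^n input masks with a single forward pass using halved weights; the two printed distributions agree up to numerical error, which is the content of equations 7.60–7.66.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

n, k = 4, 3                      # input units, classes (small so 2^n is cheap)
W = rng.normal(size=(n, k))
b = rng.normal(size=k)
v = rng.normal(size=n)

# Renormalized geometric mean over all 2^n binary input masks (eqs. 7.58-7.59).
log_probs = np.zeros(k)
for d in product([0, 1], repeat=n):
    log_probs += np.log(softmax(W.T @ (np.array(d) * v) + b))
geo = np.exp(log_probs / 2 ** n)
geo /= geo.sum()

# Weight scaling inference rule: one forward pass with the weights divided by 2.
scaled = softmax((0.5 * W).T @ v + b)

print(np.round(geo, 4), np.round(scaled, 4))
```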
To see that the weight scaling rule is exact, we can simplify P̃_ensemble:

P̃_ensemble(y = y | v) = ( Π_{d∈{0,1}^n} P(y = y | v; d) )^{1/2^n}   (7.60)
  = ( Π_{d∈{0,1}^n} softmax(W⊤ (d ⊙ v) + b)_y )^{1/2^n}   (7.61)
  = ( Π_{d∈{0,1}^n} exp(W_{y,:} (d ⊙ v) + b_y) / Σ_{y′} exp(W_{y′,:} (d ⊙ v) + b_{y′}) )^{1/2^n}   (7.62)
  = ( Π_{d∈{0,1}^n} exp(W_{y,:} (d ⊙ v) + b_y) )^{1/2^n} / ( Π_{d∈{0,1}^n} Σ_{y′} exp(W_{y′,:} (d ⊙ v) + b_{y′}) )^{1/2^n}.   (7.63)

Because P̃ will be normalized, we can safely ignore multiplication by factors that are constant with respect to y:

P̃_ensemble(y = y | v) ∝ ( Π_{d∈{0,1}^n} exp(W_{y,:} (d ⊙ v) + b_y) )^{1/2^n}   (7.64)
  = exp( (1/2^n) Σ_{d∈{0,1}^n} ( W_{y,:} (d ⊙ v) + b_y ) )   (7.65)
  = exp( (1/2) W_{y,:} v + b_y ).   (7.66)

Substituting this back into equation 7.58, we obtain a softmax classifier with weights (1/2)W.

The weight scaling rule is also exact in other settings, including regression networks with conditionally normal outputs, and deep networks that have hidden layers without nonlinearities. However, the weight scaling rule is only an approximation for deep models that have nonlinearities. Though the approximation has not been theoretically characterized, it often works well, empirically. Goodfellow et al. (2013a) found experimentally that the weight scaling approximation can work better (in terms of classification accuracy) than Monte Carlo approximations to the ensemble predictor. This held true even when the Monte Carlo approximation was allowed to sample up to 1,000 sub-networks. Gal and Ghahramani (2015) found that some models obtain better classification accuracy using twenty samples and
the Monte Carlo approximation. It appears that the optimal choice of inference approximation is problem-dependent.

Srivastava et al. (2014) showed that dropout is more effective than other standard computationally inexpensive regularizers, such as weight decay, filter norm constraints and sparse activity regularization. Dropout may also be combined with other forms of regularization to yield a further improvement.

One advantage of dropout is that it is very computationally cheap. Using dropout during training requires only O(n) computation per example per update, to generate n random binary numbers and multiply them by the state. Depending on the implementation, it may also require O(n) memory to store these binary numbers until the back-propagation stage. Running inference in the trained model has the same cost per-example as if dropout were not used, though we must pay the cost of dividing the weights by 2 once before beginning to run inference on examples.

Another significant advantage of dropout is that it does not significantly limit the type of model or training procedure that can be used. It works well with nearly any model that uses a distributed representation and can be trained with stochastic gradient descent. This includes feedforward neural networks, probabilistic models such as restricted Boltzmann machines (Srivastava et al., 2014), and recurrent neural networks (Bayer and Osendorfer, 2014; Pascanu et al., 2014a). Many other regularization strategies of comparable power impose more severe restrictions on the architecture of the model.

Though the cost per-step of applying dropout to a specific model is negligible, the cost of using dropout in a complete system can be significant. Because dropout is a regularization technique, it reduces the effective capacity of a model. To offset this effect, we must increase the size of the model. Typically the optimal validation set error is much lower when using dropout, but this comes at the cost of a much larger model and many more iterations of the training algorithm. For very large datasets, regularization confers little reduction in generalization error. In these cases, the computational cost of using dropout and larger models may outweigh the benefit of regularization.

When extremely few labeled training examples are available, dropout is less effective. Bayesian neural networks (Neal, 1996) outperform dropout on the Alternative Splicing Dataset (Xiong et al., 2011), where fewer than 5,000 examples are available (Srivastava et al., 2014). When additional unlabeled data is available, unsupervised feature learning can gain an advantage over dropout.

Wager et al. (2013) showed that, when applied to linear regression, dropout is equivalent to L2 weight decay, with a different weight decay coefficient for
each input feature. The magnitude of each feature's weight decay coefficient is determined by its variance. Similar results hold for other linear models. For deep models, dropout is not equivalent to weight decay.

The stochasticity used while training with dropout is not necessary for the approach's success. It is just a means of approximating the sum over all sub-models. Wang and Manning (2013) derived analytical approximations to this marginalization. Their approximation, known as fast dropout, resulted in faster convergence time due to the reduced stochasticity in the computation of the gradient. This method can also be applied at test time, as a more principled (but also more computationally expensive) approximation to the average over all sub-networks than the weight scaling approximation. Fast dropout has been used to nearly match the performance of standard dropout on small neural network problems, but has not yet yielded a significant improvement or been applied to a large problem.

Just as stochasticity is not necessary to achieve the regularizing effect of dropout, it is also not sufficient. To demonstrate this, Warde-Farley et al. (2014) designed control experiments using a method called dropout boosting, designed to use exactly the same mask noise as traditional dropout but to lack its regularizing effect. Dropout boosting trains the entire ensemble to jointly maximize the log-likelihood on the training set. In the same sense that traditional dropout is analogous to bagging, this approach is analogous to boosting. As intended, experiments with dropout boosting show almost no regularization effect compared to training the entire network as a single model. This demonstrates that the interpretation of dropout as bagging has value beyond the interpretation of dropout as robustness to noise. The regularization effect of the bagged ensemble is only achieved when the stochastically sampled ensemble members are trained to perform well independently of each other.

Dropout has inspired other stochastic approaches to training exponentially large ensembles of models that share weights. DropConnect is a special case of dropout where each product between a single scalar weight and a single hidden unit state is considered a unit that can be dropped (Wan et al., 2013). Stochastic pooling is a form of randomized pooling (see section 9.3) for building ensembles of convolutional networks, with each convolutional network attending to different spatial locations of each feature map. So far, dropout remains the most widely used implicit ensemble method.

One of the key insights of dropout is that training a network with stochastic behavior and making predictions by averaging over multiple stochastic decisions implements a form of bagging with parameter sharing. Earlier, we described
dropout as bagging an ensemble of models formed by including or excluding units. However, there is no need for this model averaging strategy to be based on inclusion and exclusion. In principle, any kind of random modification is admissible. In practice, we must choose modification families that neural networks are able to learn to resist. Ideally, we should also use model families that allow a fast approximate inference rule. We can think of any form of modification parametrized by a vector µ as training an ensemble consisting of p(y | x, µ) for all possible values of µ. There is no requirement that µ have a finite number of values. For example, µ can be real-valued. Srivastava et al. (2014) showed that multiplying the weights by µ ∼ N(1, I) can outperform dropout based on binary masks. Because E[µ] = 1, the standard network automatically implements approximate inference in the ensemble, without needing any weight scaling.

So far we have described dropout purely as a means of performing efficient, approximate bagging. However, there is another view of dropout that goes further than this. Dropout trains not just a bagged ensemble of models, but an ensemble of models that share hidden units. This means each hidden unit must be able to perform well regardless of which other hidden units are in the model. Hidden units must be prepared to be swapped and interchanged between models. Hinton et al. (2012c) were inspired by an idea from biology: sexual reproduction, which involves swapping genes between two different organisms, creates evolutionary pressure for genes to become not just good, but readily swapped between different organisms. Such genes and such features are very robust to changes in their environment because they are not able to incorrectly adapt to unusual features of any one organism or model. Dropout thus regularizes each hidden unit to be not merely a good feature but a feature that is good in many contexts. Warde-Farley et al. (2014) compared dropout training to training of large ensembles and concluded that dropout offers additional improvements to generalization error beyond those obtained by ensembles of independent models.

It is important to understand that a large portion of the power of dropout arises from the fact that the masking noise is applied to the hidden units. This can be seen as a form of highly intelligent, adaptive destruction of the information content of the input rather than destruction of the raw values of the input. For example, if the model learns a hidden unit h_i that detects a face by finding the nose, then dropping h_i corresponds to erasing the information that there is a nose in the image. The model must learn another h_i, either one that redundantly encodes the presence of a nose, or one that detects the face by another feature, such as the mouth. Traditional noise injection techniques that add unstructured noise at the input are not able to randomly erase the information about a nose from an image of a face unless the magnitude of the noise is so great that nearly all of the information in
the image is removed. Destroying extracted features rather than original values allows the destruction process to make use of all of the knowledge about the input distribution that the model has acquired so far.

Another important aspect of dropout is that the noise is multiplicative. If the noise were additive with fixed scale, then a rectified linear hidden unit h_i with added noise ε could simply learn to have h_i become very large in order to make the added noise ε insignificant by comparison. Multiplicative noise does not allow such a pathological solution to the noise robustness problem.

Another deep learning algorithm, batch normalization, reparametrizes the model in a way that introduces both additive and multiplicative noise on the hidden units at training time. The primary purpose of batch normalization is to improve optimization, but the noise can have a regularizing effect, and sometimes makes dropout unnecessary. Batch normalization is described further in section 8.7.1.

7.13 Adversarial Training

In many cases, neural networks have begun to reach human performance when evaluated on an i.i.d. test set. It is natural, therefore, to wonder whether these models have obtained a true human-level understanding of these tasks. In order to probe the level of understanding a network has of the underlying task, we can search for examples that the model misclassifies. Szegedy et al. (2014b) found that even neural networks that perform at human level accuracy have a nearly 100% error rate on examples that are intentionally constructed by using an optimization procedure to search for an input x′ near a data point x such that the model output is very different at x′. In many cases, x′ can be so similar to x that a human observer cannot tell the difference between the original example and the adversarial example, but the network can make highly different predictions. See figure 7.8 for an example.

Adversarial examples have many implications, for example, in computer security, that are beyond the scope of this chapter. However, they are interesting in the context of regularization because one can reduce the error rate on the original i.i.d. test set via adversarial training—training on adversarially perturbed examples from the training set (Szegedy et al., 2014b; Goodfellow et al., 2014b).

Goodfellow et al. (2014b) showed that one of the primary causes of these adversarial examples is excessive linearity. Neural networks are built out of primarily linear building blocks. In some experiments the overall function they implement proves to be highly linear as a result. These linear functions are easy
to optimize. Unfortunately, the value of a linear function can change very rapidly if it has numerous inputs. If we change each input by ε, then a linear function with weights w can change by as much as ε ||w||_1, which can be a very large amount if w is high-dimensional. Adversarial training discourages this highly sensitive locally linear behavior by encouraging the network to be locally constant in the neighborhood of the training data. This can be seen as a way of explicitly introducing a local constancy prior into supervised neural nets.

Figure 7.8: A demonstration of adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet. By adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the cost function with respect to the input, we can change GoogLeNet's classification of the image. (The figure shows an image x classified as "panda" with 57.7% confidence, plus .007 times sign(∇_x J(θ, x, y))—itself classified as "nematode" with 8.2% confidence—yielding an image x + ε sign(∇_x J(θ, x, y)) classified as "gibbon" with 99.3% confidence.) Reproduced with permission from Goodfellow et al. (2014b).

Adversarial training helps to illustrate the power of using a large function family in combination with aggressive regularization. Purely linear models, like logistic regression, are not able to resist adversarial examples because they are forced to be linear. Neural networks are able to represent functions that can range from nearly linear to nearly locally constant and thus have the flexibility to capture linear trends in the training data while still learning to resist local perturbation.

Adversarial examples also provide a means of accomplishing semi-supervised learning. At a point x that is not associated with a label in the dataset, the model itself assigns some label ŷ. The model's label ŷ may not be the true label, but if the model is high quality, then ŷ has a high probability of providing the true label. We can seek an adversarial example x′ that causes the classifier to output a label y′ with y′ ≠ ŷ. Adversarial examples generated using not the true label but a label provided by a trained model are called virtual adversarial examples (Miyato et al., 2015). The classifier may then be trained to assign the same label to x and x′.
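The perturbation shown in figure 7.8, x + ε sign(∇_x J(θ, x, y)), is easy to compute for models whose input gradient is available in closed form. The sketch below applies it to a hypothetical logistic regression classifier; ε = .007 follows the figure, and everything else (weights, input, label) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A (hypothetical) already-trained logistic regression classifier.
w = rng.normal(size=20)
b = 0.1
x = rng.normal(size=20)
y = 1.0                               # true label of x

# Cross-entropy loss J(theta, x, y) and its gradient with respect to the INPUT.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w                  # dJ/dx for logistic regression

# Sign-of-gradient perturbation: move each input coordinate by eps in the
# direction that increases the loss, as in figure 7.8.
eps = 0.007
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))   # confidence in the true class drops
```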
7.14 Tangent Distance, Tangent Prop, and Manifold Tangent Classifier

Many machine learning algorithms aim to overcome the curse of dimensionality by assuming that the data lies near a low-dimensional manifold, as described in section 5.11.3.

One of the early attempts to take advantage of the manifold hypothesis is the tangent distance algorithm (Simard et al., 1993, 1998). It is a non-parametric nearest-neighbor algorithm in which the metric used is not the generic Euclidean distance but one that is derived from knowledge of the manifolds near which probability concentrates. It is assumed that we are trying to classify examples and that examples on the same manifold share the same category. Since the classifier should be invariant to the local factors of variation that correspond to movement on the manifold, it would make sense to use as nearest-neighbor distance between points x1 and x2 the distance between the manifolds M1 and M2 to which they respectively belong. Although that may be computationally difficult (it would require solving an optimization problem to find the nearest pair of points on M1 and M2), a cheap alternative that makes sense locally is to approximate Mi by its tangent plane at xi and measure the distance between the two tangents, or between a tangent plane and a point. That can be achieved by solving a low-dimensional linear system (in the dimension of the manifolds). Of course, this algorithm requires one to specify the tangent vectors.

In a related spirit, the tangent prop algorithm (Simard et al., 1992) (figure 7.9) trains a neural net classifier with an extra penalty to make each output f(x) of the neural net locally invariant to known factors of variation. These factors of variation correspond to movement along the manifold near which examples of the same class concentrate. Local invariance is achieved by requiring ∇_x f(x) to be orthogonal to the known manifold tangent vectors v^(i) at x, or equivalently that the directional derivative of f at x in the directions v^(i) be small by adding a regularization penalty Ω:

\Omega(f) = \sum_i \left( (\nabla_x f(x))^\top v^{(i)} \right)^2 .   (7.67)
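As a concrete illustration of equation 7.67, the sketch below evaluates the penalty for a one-hidden-layer network with a scalar output, where ∇_x f(x) has a simple closed form. The weights, the input, and the tangent vectors are invented stand-ins; a practical implementation would obtain the gradient from automatic differentiation and add the scaled penalty to the training loss.

```python
import numpy as np

def f_and_input_grad(W1, b1, w2, x):
    # Scalar output f(x) = w2 . tanh(W1 x + b1) and its gradient with respect to x.
    h = np.tanh(W1 @ x + b1)
    f = w2 @ h
    grad_x = W1.T @ (w2 * (1.0 - h ** 2))
    return f, grad_x

def tangent_prop_penalty(grad_x, tangents):
    # Equation 7.67: sum_i ((grad_x f(x))^T v^(i))^2 over the known tangent vectors v^(i).
    return sum((grad_x @ v) ** 2 for v in tangents)

rng = np.random.default_rng(0)
W1, b1, w2 = rng.normal(size=(4, 6)), np.zeros(4), rng.normal(size=4)
x = rng.normal(size=6)
tangents = [rng.normal(size=6) for _ in range(2)]   # stand-ins for known tangent directions
_, grad_x = f_and_input_grad(W1, b1, w2, x)
print(tangent_prop_penalty(grad_x, tangents))       # added to the loss, scaled by a hyperparameter
```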
This regularizer can of course be scaled by an appropriate hyperparameter, and, for most neural networks, we would need to sum over many outputs rather than the lone output f(x) described here for simplicity. As with the tangent distance algorithm, the tangent vectors are derived a priori, usually from the formal knowledge of the effect of transformations such as translation, rotation, and scaling in images. Tangent prop has been used not just for supervised learning (Simard et al., 1992) but also in the context of reinforcement learning (Thrun, 1995).

Tangent propagation is closely related to dataset augmentation. In both cases, the user of the algorithm encodes his or her prior knowledge of the task by specifying a set of transformations that should not alter the output of the network. The difference is that in the case of dataset augmentation, the network is explicitly trained to correctly classify distinct inputs that were created by applying more than an infinitesimal amount of these transformations. Tangent propagation does not require explicitly visiting a new input point. Instead, it analytically regularizes the model to resist perturbation in the directions corresponding to the specified transformation. While this analytical approach is intellectually elegant, it has two major drawbacks. First, it only regularizes the model to resist infinitesimal perturbation. Explicit dataset augmentation confers resistance to larger perturbations. Second, the infinitesimal approach poses difficulties for models based on rectified linear units. These models can only shrink their derivatives by turning units off or shrinking their weights. They are not able to shrink their derivatives by saturating at a high value with large weights, as sigmoid or tanh units can. Dataset augmentation works well with rectified linear units because different subsets of rectified units can activate for different transformed versions of each original input.

Tangent propagation is also related to double backprop (Drucker and LeCun, 1992) and adversarial training (Szegedy et al., 2014b; Goodfellow et al., 2014b). Double backprop regularizes the Jacobian to be small, while adversarial training finds inputs near the original inputs and trains the model to produce the same output on these as on the original inputs. Tangent propagation and dataset augmentation using manually specified transformations both require that the model should be invariant to certain specified directions of change in the input. Double backprop and adversarial training both require that the model should be invariant to all directions of change in the input so long as the change is small. Just as dataset augmentation is the non-infinitesimal version of tangent propagation, adversarial training is the non-infinitesimal version of double backprop.

The manifold tangent classifier (Rifai et al., 2011c) eliminates the need to know the tangent vectors a priori. As we will see in chapter 14, autoencoders can
[Figure 7.9 plot: axes x1 and x2, with a "Tangent" vector and a "Normal" vector drawn on a class manifold.] Figure 7.9: Illustration of the main idea of the tangent prop algorithm (Simard et al., 1992) and manifold tangent classifier (Rifai et al., 2011c), which both regularize the classifier output function f(x). Each curve represents the manifold for a different class, illustrated here as a one-dimensional manifold embedded in a two-dimensional space. On one curve, we have chosen a single point and drawn a vector that is tangent to the class manifold (parallel to and touching the manifold) and a vector that is normal to the class manifold (orthogonal to the manifold). In multiple dimensions there may be many tangent directions and many normal directions. We expect the classification function to change rapidly as it moves in the direction normal to the manifold, and not to change as it moves along the class manifold. Both tangent propagation and the manifold tangent classifier regularize f(x) to not change very much as x moves along the manifold. Tangent propagation requires the user to manually specify functions that compute the tangent directions (such as specifying that small translations of images remain in the same class manifold) while the manifold tangent classifier estimates the manifold tangent directions by training an autoencoder to fit the training data. The use of autoencoders to estimate manifolds will be described in chapter 14.

estimate the manifold tangent vectors. The manifold tangent classifier makes use of this technique to avoid needing user-specified tangent vectors. As illustrated in figure 14.10, these estimated tangent vectors go beyond the classical invariants that arise out of the geometry of images (such as translation, rotation and scaling) and include factors that must be learned because they are object-specific (such as moving body parts). The algorithm proposed with the manifold tangent classifier is therefore simple: (1) use an autoencoder to learn the manifold structure by unsupervised learning, and (2) use these tangents to regularize a neural net classifier as in tangent prop (equation 7.67).

This chapter has described most of the general strategies used to regularize neural networks. Regularization is a central theme of machine learning and as such
will be revisited periodically by most of the remaining chapters. Another central theme of machine learning is optimization, described next.
  • 290. Chapter 8 Optimization for Training Deep Models Deep learning algorithms involve optimization in many contexts. For example, performing inference in models such as PCA involves solving an optimization problem. We often use analytical optimization to write proofs or design algorithms. Of all of the many optimization problems involved in deep learning, the most difficult is neural network training. It is quite common to invest days to months of time on hundreds of machines in order to solve even a single instance of the neural network training problem. Because this problem is so important and so expensive, a specialized set of optimization techniques have been developed for solving it. This chapter presents these optimization techniques for neural network training. If you are unfamiliar with the basic principles of gradient-based optimization, we suggest reviewing chapter . That chapter includes a brief overview of numerical 4 optimization in general. This chapter focuses on one particular case of optimization: finding the param- eters θ of a neural network that significantly reduce a cost function J(θ), which typically includes a performance measure evaluated on the entire training set as well as additional regularization terms. We begin with a description of how optimization used as a training algorithm for a machine learning task differs from pure optimization. Next, we present several of the concrete challenges that make optimization of neural networks difficult. We then define several practical algorithms, including both optimization algorithms themselves and strategies for initializing the parameters. More advanced algorithms adapt their learning rates during training or leverage information contained in 274
the second derivatives of the cost function. Finally, we conclude with a review of several optimization strategies that are formed by combining simple optimization algorithms into higher-level procedures.

8.1 How Learning Differs from Pure Optimization

Optimization algorithms used for training of deep models differ from traditional optimization algorithms in several ways. Machine learning usually acts indirectly. In most machine learning scenarios, we care about some performance measure P, that is defined with respect to the test set and may also be intractable. We therefore optimize P only indirectly. We reduce a different cost function J(θ) in the hope that doing so will improve P. This is in contrast to pure optimization, where minimizing J is a goal in and of itself. Optimization algorithms for training deep models also typically include some specialization on the specific structure of machine learning objective functions.

Typically, the cost function can be written as an average over the training set, such as

J(\theta) = \mathbb{E}_{(x, y) \sim \hat{p}_{\text{data}}} L(f(x; \theta), y),   (8.1)

where L is the per-example loss function, f(x; θ) is the predicted output when the input is x, and p̂_data is the empirical distribution. In the supervised learning case, y is the target output. Throughout this chapter, we develop the unregularized supervised case, where the arguments to L are f(x; θ) and y. However, it is trivial to extend this development, for example, to include θ or x as arguments, or to exclude y as arguments, in order to develop various forms of regularization or unsupervised learning.

Equation 8.1 defines an objective function with respect to the training set. We would usually prefer to minimize the corresponding objective function where the expectation is taken across the data generating distribution p_data rather than just over the finite training set:

J^*(\theta) = \mathbb{E}_{(x, y) \sim p_{\text{data}}} L(f(x; \theta), y).   (8.2)

8.1.1 Empirical Risk Minimization

The goal of a machine learning algorithm is to reduce the expected generalization error given by equation 8.2. This quantity is known as the risk. We emphasize here that the expectation is taken over the true underlying distribution p_data. If we knew the true distribution p_data(x, y), risk minimization would be an optimization task
solvable by an optimization algorithm. However, when we do not know p_data(x, y) but only have a training set of samples, we have a machine learning problem.

The simplest way to convert a machine learning problem back into an optimization problem is to minimize the expected loss on the training set. This means replacing the true distribution p(x, y) with the empirical distribution p̂(x, y) defined by the training set. We now minimize the empirical risk

\mathbb{E}_{x, y \sim \hat{p}_{\text{data}}(x, y)} \left[ L(f(x; \theta), y) \right] = \frac{1}{m} \sum_{i=1}^{m} L(f(x^{(i)}; \theta), y^{(i)}),   (8.3)

where m is the number of training examples.

The training process based on minimizing this average training error is known as empirical risk minimization. In this setting, machine learning is still very similar to straightforward optimization. Rather than optimizing the risk directly, we optimize the empirical risk, and hope that the risk decreases significantly as well. A variety of theoretical results establish conditions under which the true risk can be expected to decrease by various amounts.

However, empirical risk minimization is prone to overfitting. Models with high capacity can simply memorize the training set. In many cases, empirical risk minimization is not really feasible. The most effective modern optimization algorithms are based on gradient descent, but many useful loss functions, such as 0-1 loss, have no useful derivatives (the derivative is either zero or undefined everywhere). These two problems mean that, in the context of deep learning, we rarely use empirical risk minimization. Instead, we must use a slightly different approach, in which the quantity that we actually optimize is even more different from the quantity that we truly want to optimize.

8.1.2 Surrogate Loss Functions and Early Stopping

Sometimes, the loss function we actually care about (say classification error) is not one that can be optimized efficiently. For example, exactly minimizing expected 0-1 loss is typically intractable (exponential in the input dimension), even for a linear classifier (Marcotte and Savard, 1992). In such situations, one typically optimizes a surrogate loss function instead, which acts as a proxy but has advantages. For example, the negative log-likelihood of the correct class is typically used as a surrogate for the 0-1 loss. The negative log-likelihood allows the model to estimate the conditional probability of the classes, given the input, and if the model can do that well, then it can pick the classes that yield the least classification error in expectation.
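To make the contrast concrete, the following sketch compares the 0-1 loss with its negative log-likelihood surrogate for a binary linear classifier; the data and the parameter vector are arbitrary stand-ins. The 0-1 loss is piecewise constant in θ, so gradient descent gets no signal from it, while the surrogate is smooth and differentiable.

```python
import numpy as np

def zero_one_loss(theta, X, y):
    # Average 0-1 loss; piecewise constant in theta, so its gradient is zero almost everywhere.
    preds = (X @ theta > 0).astype(int)
    return np.mean(preds != y)

def nll_surrogate(theta, X, y):
    # Average negative log-likelihood of a logistic model, a smooth surrogate for the 0-1 loss.
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(int)
theta = rng.normal(size=5)
print(zero_one_loss(theta, X, y), nll_surrogate(theta, X, y))
```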
In some cases, a surrogate loss function actually results in being able to learn more. For example, the test set 0-1 loss often continues to decrease for a long time after the training set 0-1 loss has reached zero, when training using the log-likelihood surrogate. This is because even when the expected 0-1 loss is zero, one can improve the robustness of the classifier by further pushing the classes apart from each other, obtaining a more confident and reliable classifier, thus extracting more information from the training data than would have been possible by simply minimizing the average 0-1 loss on the training set.

A very important difference between optimization in general and optimization as we use it for training algorithms is that training algorithms do not usually halt at a local minimum. Instead, a machine learning algorithm usually minimizes a surrogate loss function but halts when a convergence criterion based on early stopping (section 7.8) is satisfied. Typically the early stopping criterion is based on the true underlying loss function, such as 0-1 loss measured on a validation set, and is designed to cause the algorithm to halt whenever overfitting begins to occur. Training often halts while the surrogate loss function still has large derivatives, which is very different from the pure optimization setting, where an optimization algorithm is considered to have converged when the gradient becomes very small.
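A minimal sketch of this halting behavior is given below; train_step and validation_error are hypothetical callables standing in for one SGD update on the surrogate loss and for evaluation of the true loss (such as 0-1 loss) on a validation set.

```python
def train_with_early_stopping(theta, train_step, validation_error,
                              max_steps=10000, eval_every=100, patience=10):
    # Halts based on the validation error, not on the surrogate gradient becoming small.
    # Assumes train_step returns a fresh parameter object rather than mutating theta.
    best_theta, best_err, bad_evals = theta, float("inf"), 0
    for step in range(max_steps):
        theta = train_step(theta)              # one update on the surrogate loss
        if step % eval_every == 0:
            err = validation_error(theta)      # true underlying loss on validation data
            if err < best_err:
                best_theta, best_err, bad_evals = theta, err, 0
            else:
                bad_evals += 1
                if bad_evals >= patience:      # overfitting appears to have begun: halt
                    break
    return best_theta
```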
8.1.3 Batch and Minibatch Algorithms

One aspect of machine learning algorithms that separates them from general optimization algorithms is that the objective function usually decomposes as a sum over the training examples. Optimization algorithms for machine learning typically compute each update to the parameters based on an expected value of the cost function estimated using only a subset of the terms of the full cost function.

For example, maximum likelihood estimation problems, when viewed in log space, decompose into a sum over each example:

\theta_{\text{ML}} = \arg\max_{\theta} \sum_{i=1}^{m} \log p_{\text{model}}(x^{(i)}, y^{(i)}; \theta).   (8.4)

Maximizing this sum is equivalent to maximizing the expectation over the empirical distribution defined by the training set:

J(\theta) = \mathbb{E}_{x, y \sim \hat{p}_{\text{data}}} \log p_{\text{model}}(x, y; \theta).   (8.5)

Most of the properties of the objective function J used by most of our optimization algorithms are also expectations over the training set. For example, the most commonly used property is the gradient:

\nabla_{\theta} J(\theta) = \mathbb{E}_{x, y \sim \hat{p}_{\text{data}}} \nabla_{\theta} \log p_{\text{model}}(x, y; \theta).   (8.6)

Computing this expectation exactly is very expensive because it requires evaluating the model on every example in the entire dataset. In practice, we can compute these expectations by randomly sampling a small number of examples from the dataset, then taking the average over only those examples.

Recall that the standard error of the mean (equation 5.46) estimated from n samples is given by σ/√n, where σ is the true standard deviation of the value of the samples. The denominator of √n shows that there are less than linear returns to using more examples to estimate the gradient. Compare two hypothetical estimates of the gradient, one based on 100 examples and another based on 10,000 examples. The latter requires 100 times more computation than the former, but reduces the standard error of the mean only by a factor of 10. Most optimization algorithms converge much faster (in terms of total computation, not in terms of number of updates) if they are allowed to rapidly compute approximate estimates of the gradient rather than slowly computing the exact gradient.

Another consideration motivating statistical estimation of the gradient from a small number of samples is redundancy in the training set. In the worst case, all m samples in the training set could be identical copies of each other. A sampling-based estimate of the gradient could compute the correct gradient with a single sample, using m times less computation than the naive approach. In practice, we are unlikely to truly encounter this worst-case situation, but we may find large numbers of examples that all make very similar contributions to the gradient.

Optimization algorithms that use the entire training set are called batch or deterministic gradient methods, because they process all of the training examples simultaneously in a large batch. This terminology can be somewhat confusing because the word “batch” is also often used to describe the minibatch used by minibatch stochastic gradient descent. Typically the term “batch gradient descent” implies the use of the full training set, while the use of the term “batch” to describe a group of examples does not. For example, it is very common to use the term “batch size” to describe the size of a minibatch.

Optimization algorithms that use only a single example at a time are sometimes called stochastic or sometimes online methods. The term online is usually reserved for the case where the examples are drawn from a stream of continually created examples rather than from a fixed-size training set over which several passes are made.

Most algorithms used for deep learning fall somewhere in between, using more
  • 295. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS than one but less than all of the training examples. These were traditionally called minibatch or minibatch stochastic methods and it is now common to simply call them stochastic methods. The canonical example of a stochastic method is stochastic gradient descent, presented in detail in section . 8.3.1 Minibatch sizes are generally driven by the following factors: • Larger batches provide a more accurate estimate of the gradient, but with less than linear returns. • Multicore architectures are usually underutilized by extremely small batches. This motivates using some absolute minimum batch size, below which there is no reduction in the time to process a minibatch. • If all examples in the batch are to be processed in parallel (as is typically the case), then the amount of memory scales with the batch size. For many hardware setups this is the limiting factor in batch size. • Some kinds of hardware achieve better runtime with specific sizes of arrays. Especially when using GPUs, it is common for power of 2 batch sizes to offer better runtime. Typical power of 2 batch sizes range from 32 to 256, with 16 sometimes being attempted for large models. • Small batches can offer a regularizing effect ( , ), Wilson and Martinez 2003 perhaps due to the noise they add to the learning process. Generalization error is often best for a batch size of 1. Training with such a small batch size might require a small learning rate to maintain stability due to the high variance in the estimate of the gradient. The total runtime can be very high due to the need to make more steps, both because of the reduced learning rate and because it takes more steps to observe the entire training set. Different kinds of algorithms use different kinds of information from the mini- batch in different ways. Some algorithms are more sensitive to sampling error than others, either because they use information that is difficult to estimate accurately with few samples, or because they use information in ways that amplify sampling errors more. Methods that compute updates based only on the gradient g are usually relatively robust and can handle smaller batch sizes like 100. Second-order methods, which use also the Hessian matrix H and compute updates such as H−1g, typically require much larger batch sizes like 10,000. These large batch sizes are required to minimize fluctuations in the estimates of H−1 g. Suppose that H is estimated perfectly but has a poor condition number. Multiplication by 279
  • 296. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS H or its inverse amplifies pre-existing errors, in this case, estimation errors in g. Very small changes in the estimate of g can thus cause large changes in the update H−1 g, even if H were estimated perfectly. Of course, H will be estimated only approximately, so the update H−1 g will contain even more error than we would predict from applying a poorly conditioned operation to the estimate of . g It is also crucial that the minibatches be selected randomly. Computing an unbiased estimate of the expected gradient from a set of samples requires that those samples be independent. We also wish for two subsequent gradient estimates to be independent from each other, so two subsequent minibatches of examples should also be independent from each other. Many datasets are most naturally arranged in a way where successive examples are highly correlated. For example, we might have a dataset of medical data with a long list of blood sample test results. This list might be arranged so that first we have five blood samples taken at different times from the first patient, then we have three blood samples taken from the second patient, then the blood samples from the third patient, and so on. If we were to draw examples in order from this list, then each of our minibatches would be extremely biased, because it would represent primarily one patient out of the many patients in the dataset. In cases such as these where the order of the dataset holds some significance, it is necessary to shuffle the examples before selecting minibatches. For very large datasets, for example datasets containing billions of examples in a data center, it can be impractical to sample examples truly uniformly at random every time we want to construct a minibatch. Fortunately, in practice it is usually sufficient to shuffle the order of the dataset once and then store it in shuffled fashion. This will impose a fixed set of possible minibatches of consecutive examples that all models trained thereafter will use, and each individual model will be forced to reuse this ordering every time it passes through the training data. However, this deviation from true random selection does not seem to have a significant detrimental effect. Failing to ever shuffle the examples in any way can seriously reduce the effectiveness of the algorithm. Many optimization problems in machine learning decompose over examples well enough that we can compute entire separate updates over different examples in parallel. In other words, we can compute the update that minimizes J(X) for one minibatch of examples X at the same time that we compute the update for several other minibatches. Such asynchronous parallel distributed approaches are discussed further in section . 12.1.3 An interesting motivation for minibatch stochastic gradient descent is that it follows the gradient of the true generalization error (equation ) so long as no 8.2 examples are repeated. Most implementations of minibatch stochastic gradient 280
descent shuffle the dataset once and then pass through it multiple times. On the first pass, each minibatch is used to compute an unbiased estimate of the true generalization error. On the second pass, the estimate becomes biased because it is formed by re-sampling values that have already been used, rather than obtaining new fair samples from the data generating distribution.

The fact that stochastic gradient descent minimizes generalization error is easiest to see in the online learning case, where examples or minibatches are drawn from a stream of data. In other words, instead of receiving a fixed-size training set, the learner is similar to a living being who sees a new example at each instant, with every example (x, y) coming from the data generating distribution p_data(x, y). In this scenario, examples are never repeated; every experience is a fair sample from p_data.

The equivalence is easiest to derive when both x and y are discrete. In this case, the generalization error (equation 8.2) can be written as a sum

J^*(\theta) = \sum_{x} \sum_{y} p_{\text{data}}(x, y)\, L(f(x; \theta), y),   (8.7)

with the exact gradient

g = \nabla_{\theta} J^*(\theta) = \sum_{x} \sum_{y} p_{\text{data}}(x, y)\, \nabla_{\theta} L(f(x; \theta), y).   (8.8)

We have already seen the same fact demonstrated for the log-likelihood in equation 8.5 and equation 8.6; we observe now that this holds for other functions L besides the likelihood. A similar result can be derived when x and y are continuous, under mild assumptions regarding p_data and L.

Hence, we can obtain an unbiased estimator of the exact gradient of the generalization error by sampling a minibatch of examples {x^(1), . . . , x^(m)} with corresponding targets y^(i) from the data generating distribution p_data, and computing the gradient of the loss with respect to the parameters for that minibatch:

\hat{g} = \frac{1}{m} \nabla_{\theta} \sum_{i} L(f(x^{(i)}; \theta), y^{(i)}).   (8.9)

Updating θ in the direction of −ĝ performs SGD on the generalization error.

Of course, this interpretation only applies when examples are not reused. Nonetheless, it is usually best to make several passes through the training set, unless the training set is extremely large. When multiple such epochs are used, only the first epoch follows the unbiased gradient of the generalization error, but of course, the additional epochs usually provide enough benefit due to decreased training error to offset the harm they cause by increasing the gap between training error and test error.
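Equation 8.9 is straightforward to implement. The sketch below uses a linear model with squared error, chosen only because the per-example gradients then have a closed form, and it also illustrates the less-than-linear returns discussed earlier: the standard error of the estimate shrinks only by a factor of about 10 when the minibatch size grows by a factor of 100.

```python
import numpy as np

def minibatch_gradient(theta, X, y, rng, m=100):
    # Equation 8.9 for a linear model with loss L = 0.5 * (x^T theta - y)^2:
    # g_hat = (1/m) * sum_i x_i (x_i^T theta - y_i), averaged over a sampled minibatch.
    idx = rng.choice(len(X), size=m, replace=False)   # sample the minibatch uniformly at random
    Xb, yb = X[idx], y[idx]
    return (Xb.T @ (Xb @ theta - yb)) / m

rng = np.random.default_rng(0)
X = rng.normal(size=(50000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=50000)
theta = np.zeros(10)

# Standard error of the estimator shrinks roughly like sigma / sqrt(m).
for m in (100, 10000):
    estimates = np.stack([minibatch_gradient(theta, X, y, rng, m) for _ in range(200)])
    print(m, estimates.std(axis=0).mean())
```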
With some datasets growing rapidly in size, faster than computing power, it is becoming more common for machine learning applications to use each training example only once or even to make an incomplete pass through the training set. When using an extremely large training set, overfitting is not an issue, so underfitting and computational efficiency become the predominant concerns. See also Bottou and Bousquet (2008) for a discussion of the effect of computational bottlenecks on generalization error, as the number of training examples grows.

8.2 Challenges in Neural Network Optimization

Optimization in general is an extremely difficult task. Traditionally, machine learning has avoided the difficulty of general optimization by carefully designing the objective function and constraints to ensure that the optimization problem is convex. When training neural networks, we must confront the general non-convex case. Even convex optimization is not without its complications. In this section, we summarize several of the most prominent challenges involved in optimization for training deep models.

8.2.1 Ill-Conditioning

Some challenges arise even when optimizing convex functions. Of these, the most prominent is ill-conditioning of the Hessian matrix H. This is a very general problem in most numerical optimization, convex or otherwise, and is described in more detail in section 4.3.1.

The ill-conditioning problem is generally believed to be present in neural network training problems. Ill-conditioning can manifest by causing SGD to get “stuck” in the sense that even very small steps increase the cost function.

Recall from equation 4.9 that a second-order Taylor series expansion of the cost function predicts that a gradient descent step of −εg will add

\frac{1}{2}\epsilon^2 g^\top H g - \epsilon\, g^\top g   (8.10)

to the cost. Ill-conditioning of the gradient becomes a problem when (ε²/2) gᵀHg exceeds ε gᵀg. To determine whether ill-conditioning is detrimental to a neural network training task, one can monitor the squared gradient norm gᵀg and the gᵀHg term.
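One way to do this monitoring, sketched below, is to estimate the curvature term with a finite-difference Hessian-vector product so that H never has to be formed explicitly. Here grad is a hypothetical callable returning the gradient of the cost at a given parameter vector, and the step size is a placeholder.

```python
import numpy as np

def curvature_terms(grad, theta, eps=1e-4):
    # Returns (g^T g, g^T H g) for the cost at theta, given only a gradient callable.
    # H g is estimated with the finite difference (grad(theta + eps*v) - grad(theta)) / eps
    # along the unit vector v in the gradient direction, so the Hessian is never formed.
    g = grad(theta)
    v = g / (np.linalg.norm(g) + 1e-12)
    Hv = (grad(theta + eps * v) - g) / eps
    return g @ g, np.linalg.norm(g) * (g @ Hv)

# Monitoring sketch: log both quantities every few hundred updates during training.
# If (eps^2 / 2) * gHg comes to dominate eps * gg, the learning rate must shrink
# to compensate for curvature even though the gradient itself remains strong.
```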
In many cases, the gradient norm does not shrink significantly throughout learning, but the gᵀHg term grows by more than an order of magnitude. The result is that learning becomes very slow despite the presence of a strong gradient because the learning rate must be shrunk to compensate for even stronger curvature. Figure 8.1 shows an example of the gradient increasing significantly during the successful training of a neural network.

[Figure 8.1 panels: (left) gradient norm versus training time in epochs; (right) classification error rate versus training time in epochs.] Figure 8.1: Gradient descent often does not arrive at a critical point of any kind. In this example, the gradient norm increases throughout training of a convolutional network used for object detection. (Left) A scatterplot showing how the norms of individual gradient evaluations are distributed over time. To improve legibility, only one gradient norm is plotted per epoch. The running average of all gradient norms is plotted as a solid curve. The gradient norm clearly increases over time, rather than decreasing as we would expect if the training process converged to a critical point. (Right) Despite the increasing gradient, the training process is reasonably successful. The validation set classification error decreases to a low level.

Though ill-conditioning is present in other settings besides neural network training, some of the techniques used to combat it in other contexts are less applicable to neural networks. For example, Newton’s method is an excellent tool for minimizing convex functions with poorly conditioned Hessian matrices, but in the subsequent sections we will argue that Newton’s method requires significant modification before it can be applied to neural networks.

8.2.2 Local Minima

One of the most prominent features of a convex optimization problem is that it can be reduced to the problem of finding a local minimum. Any local minimum is
guaranteed to be a global minimum. Some convex functions have a flat region at the bottom rather than a single global minimum point, but any point within such a flat region is an acceptable solution. When optimizing a convex function, we know that we have reached a good solution if we find a critical point of any kind.

With non-convex functions, such as neural nets, it is possible to have many local minima. Indeed, nearly any deep model is essentially guaranteed to have an extremely large number of local minima. However, as we will see, this is not necessarily a major problem.

Neural networks and any models with multiple equivalently parametrized latent variables all have multiple local minima because of the model identifiability problem. A model is said to be identifiable if a sufficiently large training set can rule out all but one setting of the model’s parameters. Models with latent variables are often not identifiable because we can obtain equivalent models by exchanging latent variables with each other. For example, we could take a neural network and modify layer 1 by swapping the incoming weight vector for unit i with the incoming weight vector for unit j, then doing the same for the outgoing weight vectors. If we have m layers with n units each, then there are n!^m ways of arranging the hidden units. This kind of non-identifiability is known as weight space symmetry.

In addition to weight space symmetry, many kinds of neural networks have additional causes of non-identifiability. For example, in any rectified linear or maxout network, we can scale all of the incoming weights and biases of a unit by α if we also scale all of its outgoing weights by 1/α. This means that—if the cost function does not include terms such as weight decay that depend directly on the weights rather than the models’ outputs—every local minimum of a rectified linear or maxout network lies on an (m × n)-dimensional hyperbola of equivalent local minima.

These model identifiability issues mean that there can be an extremely large or even uncountably infinite amount of local minima in a neural network cost function. However, all of these local minima arising from non-identifiability are equivalent to each other in cost function value. As a result, these local minima are not a problematic form of non-convexity.

Local minima can be problematic if they have high cost in comparison to the global minimum. One can construct small neural networks, even without hidden units, that have local minima with higher cost than the global minimum (Sontag and Sussman, 1989; Brady et al., 1989; Gori and Tesi, 1992). If local minima with high cost are common, this could pose a serious problem for gradient-based optimization algorithms.

It remains an open question whether there are many local minima of high cost
  • 301. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS for networks of practical interest and whether optimization algorithms encounter them. For many years, most practitioners believed that local minima were a common problem plaguing neural network optimization. Today, that does not appear to be the case. The problem remains an active area of research, but experts now suspect that, for sufficiently large neural networks, most local minima have a low cost function value, and that it is not important to find a true global minimum rather than to find a point in parameter space that has low but not minimal cost ( , ; , ; , ; Saxe et al. 2013 Dauphin et al. 2014 Goodfellow et al. 2015 Choromanska et al., ). 2014 Many practitioners attribute nearly all difficulty with neural network optimiza- tion to local minima. We encourage practitioners to carefully test for specific problems. A test that can rule out local minima as the problem is to plot the norm of the gradient over time. If the norm of the gradient does not shrink to insignificant size, the problem is neither local minima nor any other kind of critical point. This kind of negative test can rule out local minima. In high dimensional spaces, it can be very difficult to positively establish that local minima are the problem. Many structures other than local minima also have small gradients. 8.2.3 Plateaus, Saddle Points and Other Flat Regions For many high-dimensional non-convex functions, local minima (and maxima) are in fact rare compared to another kind of point with zero gradient: a saddle point. Some points around a saddle point have greater cost than the saddle point, while others have a lower cost. At a saddle point, the Hessian matrix has both positive and negative eigenvalues. Points lying along eigenvectors associated with positive eigenvalues have greater cost than the saddle point, while points lying along negative eigenvalues have lower value. We can think of a saddle point as being a local minimum along one cross-section of the cost function and a local maximum along another cross-section. See figure for an illustration. 4.5 Many classes of random functions exhibit the following behavior: in low- dimensional spaces, local minima are common. In higher dimensional spaces, local minima are rare and saddle points are more common. For a function f : Rn → R of this type, the expected ratio of the number of saddle points to local minima grows exponentially with n. To understand the intuition behind this behavior, observe that the Hessian matrix at a local minimum has only positive eigenvalues. The Hessian matrix at a saddle point has a mixture of positive and negative eigenvalues. Imagine that the sign of each eigenvalue is generated by flipping a coin. In a single dimension, it is easy to obtain a local minimum by tossing a coin and getting heads once. In n-dimensional space, it is exponentially unlikely that all n coin tosses will 285
  • 302. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS be heads. See ( ) for a review of the relevant theoretical work. Dauphin et al. 2014 An amazing property of many random functions is that the eigenvalues of the Hessian become more likely to be positive as we reach regions of lower cost. In our coin tossing analogy, this means we are more likely to have our coin come up heads n times if we are at a critical point with low cost. This means that local minima are much more likely to have low cost than high cost. Critical points with high cost are far more likely to be saddle points. Critical points with extremely high cost are more likely to be local maxima. This happens for many classes of random functions. Does it happen for neural networks? ( ) showed theoretically that shallow autoencoders Baldi and Hornik 1989 (feedforward networks trained to copy their input to their output, described in chapter ) with no nonlinearities have global minima and saddle points but no 14 local minima with higher cost than the global minimum. They observed without proof that these results extend to deeper networks without nonlinearities. The output of such networks is a linear function of their input, but they are useful to study as a model of nonlinear neural networks because their loss function is a non-convex function of their parameters. Such networks are essentially just multiple matrices composed together. ( ) provided exact solutions Saxe et al. 2013 to the complete learning dynamics in such networks and showed that learning in these models captures many of the qualitative features observed in the training of deep models with nonlinear activation functions. ( ) showed Dauphin et al. 2014 experimentally that real neural networks also have loss functions that contain very many high-cost saddle points. Choromanska 2014 et al. ( ) provided additional theoretical arguments, showing that another class of high-dimensional random functions related to neural networks does so as well. What are the implications of the proliferation of saddle points for training algo- rithms? For first-order optimization algorithms that use only gradient information, the situation is unclear. The gradient can often become very small near a saddle point. On the other hand, gradient descent empirically seems to be able to escape saddle points in many cases. ( ) provided visualizations of Goodfellow et al. 2015 several learning trajectories of state-of-the-art neural networks, with an example given in figure . These visualizations show a flattening of the cost function near 8.2 a prominent saddle point where the weights are all zero, but they also show the gradient descent trajectory rapidly escaping this region. ( ) Goodfellow et al. 2015 also argue that continuous-time gradient descent may be shown analytically to be repelled from, rather than attracted to, a nearby saddle point, but the situation may be different for more realistic uses of gradient descent. For Newton’s method, it is clear that saddle points constitute a problem. 286
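The coin-flipping intuition described above can be checked directly in the simplest toy model: draw random symmetric matrices as stand-ins for Hessians at critical points and count how often every eigenvalue is positive. The experiment below is only an illustration of that intuition, not a reproduction of any of the cited analyses.

```python
import numpy as np

def fraction_all_positive(n, trials=2000, seed=0):
    # Fraction of random symmetric n x n matrices whose eigenvalues are all positive,
    # i.e. the fraction of "critical points" that would be local minima in this toy model.
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        A = rng.normal(size=(n, n))
        H = (A + A.T) / 2.0                     # symmetric stand-in for a Hessian
        if np.all(np.linalg.eigvalsh(H) > 0):   # "all n coins came up heads"
            count += 1
    return count / trials

for n in (1, 2, 4, 8):
    print(n, fraction_all_positive(n))          # the all-positive case rapidly becomes rare
```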
[Figure 8.2 plot: the cost J(θ) over a two-dimensional slice of parameter space, axes labeled “Projection 1 of θ” and “Projection 2 of θ”.] Figure 8.2: A visualization of the cost function of a neural network. Image adapted with permission from Goodfellow et al. (2015). These visualizations appear similar for feedforward neural networks, convolutional networks, and recurrent networks applied to real object recognition and natural language processing tasks. Surprisingly, these visualizations usually do not show many conspicuous obstacles. Prior to the success of stochastic gradient descent for training very large models beginning in roughly 2012, neural net cost function surfaces were generally believed to have much more non-convex structure than is revealed by these projections. The primary obstacle revealed by this projection is a saddle point of high cost near where the parameters are initialized, but, as indicated by the blue path, the SGD training trajectory escapes this saddle point readily. Most of training time is spent traversing the relatively flat valley of the cost function, which may be due to high noise in the gradient, poor conditioning of the Hessian matrix in this region, or simply the need to circumnavigate the tall “mountain” visible in the figure via an indirect arcing path.
  • 304. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS Gradient descent is designed to move “downhill” and is not explicitly designed to seek a critical point. Newton’s method, however, is designed to solve for a point where the gradient is zero. Without appropriate modification, it can jump to a saddle point. The proliferation of saddle points in high dimensional spaces presumably explains why second-order methods have not succeeded in replacing gradient descent for neural network training. ( ) introduced a Dauphin et al. 2014 saddle-free Newton method for second-order optimization and showed that it improves significantly over the traditional version. Second-order methods remain difficult to scale to large neural networks, but this saddle-free approach holds promise if it could be scaled. There are other kinds of points with zero gradient besides minima and saddle points. There are also maxima, which are much like saddle points from the perspective of optimization—many algorithms are not attracted to them, but unmodified Newton’s method is. Maxima of many classes of random functions become exponentially rare in high dimensional space, just like minima do. There may also be wide, flat regions of constant value. In these locations, the gradient and also the Hessian are all zero. Such degenerate locations pose major problems for all numerical optimization algorithms. In a convex problem, a wide, flat region must consist entirely of global minima, but in a general optimization problem, such a region could correspond to a high value of the objective function. 8.2.4 Cliffs and Exploding Gradients Neural networks with many layers often have extremely steep regions resembling cliffs, as illustrated in figure . These result from the multiplication of several 8.3 large weights together. On the face of an extremely steep cliff structure, the gradient update step can move the parameters extremely far, usually jumping off of the cliff structure altogether. 288
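The usual remedy, described in the following paragraphs and in detail in section 10.11.1, is gradient clipping. A minimal norm-clipping sketch (the threshold value is an arbitrary placeholder) looks like this:

```python
import numpy as np

def clip_gradient(g, threshold=1.0):
    # Rescale g so its norm never exceeds the threshold. The direction is preserved,
    # but a single update near a cliff can no longer catapult the parameters.
    norm = np.linalg.norm(g)
    if norm > threshold:
        g = g * (threshold / norm)
    return g

# Inside an SGD loop this replaces the raw gradient:
#     theta = theta - learning_rate * clip_gradient(gradient_estimate)
```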
Figure 8.3: The objective function for highly nonlinear deep neural networks or for recurrent neural networks often contains sharp nonlinearities in parameter space resulting from the multiplication of several parameters. These nonlinearities give rise to very high derivatives in some places. When the parameters get close to such a cliff region, a gradient descent update can catapult the parameters very far, possibly losing most of the optimization work that had been done. Figure adapted with permission from Pascanu et al. (2013).

The cliff can be dangerous whether we approach it from above or from below, but fortunately its most serious consequences can be avoided using the gradient clipping heuristic described in section 10.11.1. The basic idea is to recall that the gradient does not specify the optimal step size, but only the optimal direction within an infinitesimal region. When the traditional gradient descent algorithm proposes to make a very large step, the gradient clipping heuristic intervenes to reduce the step size to be small enough that it is less likely to go outside the region where the gradient indicates the direction of approximately steepest descent. Cliff structures are most common in the cost functions for recurrent neural networks, because such models involve a multiplication of many factors, with one factor for each time step. Long temporal sequences thus incur an extreme amount of multiplication.

8.2.5 Long-Term Dependencies

Another difficulty that neural network optimization algorithms must overcome arises when the computational graph becomes extremely deep. Feedforward networks with many layers have such deep computational graphs. So do recurrent networks, described in chapter 10, which construct very deep computational graphs
by repeatedly applying the same operation at each time step of a long temporal sequence. Repeated application of the same parameters gives rise to especially pronounced difficulties.

For example, suppose that a computational graph contains a path that consists of repeatedly multiplying by a matrix W. After t steps, this is equivalent to multiplying by W^t. Suppose that W has an eigendecomposition W = V diag(λ) V⁻¹. In this simple case, it is straightforward to see that

W^t = \left( V\, \text{diag}(\lambda)\, V^{-1} \right)^t = V\, \text{diag}(\lambda)^t\, V^{-1}.   (8.11)

Any eigenvalues λ_i that are not near an absolute value of 1 will either explode if they are greater than 1 in magnitude or vanish if they are less than 1 in magnitude. The vanishing and exploding gradient problem refers to the fact that gradients through such a graph are also scaled according to diag(λ)^t. Vanishing gradients make it difficult to know which direction the parameters should move to improve the cost function, while exploding gradients can make learning unstable. The cliff structures described earlier that motivate gradient clipping are an example of the exploding gradient phenomenon.

The repeated multiplication by W at each time step described here is very similar to the power method algorithm used to find the largest eigenvalue of a matrix W and the corresponding eigenvector. From this point of view it is not surprising that xᵀW^t will eventually discard all components of x that are orthogonal to the principal eigenvector of W.

Recurrent networks use the same matrix W at each time step, but feedforward networks do not, so even very deep feedforward networks can largely avoid the vanishing and exploding gradient problem (Sussillo, 2014). We defer a further discussion of the challenges of training recurrent networks until section 10.7, after recurrent networks have been described in more detail.

8.2.6 Inexact Gradients

Most optimization algorithms are designed with the assumption that we have access to the exact gradient or Hessian matrix. In practice, we usually only have a noisy or even biased estimate of these quantities. Nearly every deep learning algorithm relies on sampling-based estimates at least insofar as using a minibatch of training examples to compute the gradient.

In other cases, the objective function we want to minimize is actually intractable. When the objective function is intractable, typically its gradient is intractable as well. In such cases we can only approximate the gradient. These issues mostly arise
  • 307. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS with the more advanced models in part . For example, contrastive divergence III gives a technique for approximating the gradient of the intractable log-likelihood of a Boltzmann machine. Various neural network optimization algorithms are designed to account for imperfections in the gradient estimate. One can also avoid the problem by choosing a surrogate loss function that is easier to approximate than the true loss. 8.2.7 Poor Correspondence between Local and Global Structure Many of the problems we have discussed so far correspond to properties of the loss function at a single point—it can be difficult to make a single step if J(θ) is poorly conditioned at the current point θ, or if θ lies on a cliff, or if θ is a saddle point hiding the opportunity to make progress downhill from the gradient. It is possible to overcome all of these problems at a single point and still perform poorly if the direction that results in the most improvement locally does not point toward distant regions of much lower cost. Goodfellow 2015 et al. ( ) argue that much of the runtime of training is due to the length of the trajectory needed to arrive at the solution. Figure shows that 8.2 the learning trajectory spends most of its time tracing out a wide arc around a mountain-shaped structure. Much of research into the difficulties of optimization has focused on whether training arrives at a global minimum, a local minimum, or a saddle point, but in practice neural networks do not arrive at a critical point of any kind. Figure 8.1 shows that neural networks often do not arrive at a region of small gradient. Indeed, such critical points do not even necessarily exist. For example, the loss function − log p(y | x; θ) can lack a global minimum point and instead asymptotically approach some value as the model becomes more confident. For a classifier with discrete y and p(y | x) provided by a softmax, the negative log-likelihood can become arbitrarily close to zero if the model is able to correctly classify every example in the training set, but it is impossible to actually reach the value of zero. Likewise, a model of real values p(y | x) = N(y;f(θ), β−1 ) can have negative log-likelihood that asymptotes to negative infinity—if f(θ) is able to correctly predict the value of all training set y targets, the learning algorithm will increase β without bound. See figure for an example of a failure of local optimization to 8.4 find a good cost function value even in the absence of any local minima or saddle points. Future research will need to develop further understanding of the factors that influence the length of the learning trajectory and better characterize the outcome 291
  • 308. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS θ J( ) θ Figure 8.4: Optimization based on local downhill moves can fail if the local surface does not point toward the global solution. Here we provide an example of how this can occur, even if there are no saddle points and no local minima. This example cost function contains only asymptotes toward low values, not minima. The main cause of difficulty in this case is being initialized on the wrong side of the “mountain” and not being able to traverse it. In higher dimensional space, learning algorithms can often circumnavigate such mountains but the trajectory associated with doing so may be long and result in excessive training time, as illustrated in figure . 8.2 of the process. Many existing research directions are aimed at finding good initial points for problems that have difficult global structure, rather than developing algorithms that use non-local moves. Gradient descent and essentially all learning algorithms that are effective for training neural networks are based on making small, local moves. The previous sections have primarily focused on how the correct direction of these local moves can be difficult to compute. We may be able to compute some properties of the objective function, such as its gradient, only approximately, with bias or variance in our estimate of the correct direction. In these cases, local descent may or may not define a reasonably short path to a valid solution, but we are not actually able to follow the local descent path. The objective function may have issues such as poor conditioning or discontinuous gradients, causing the region where the gradient provides a good model of the objective function to be very small. In these cases, local descent with steps of size  may define a reasonably short path to the solution, but we are only able to compute the local descent direction with steps of size δ   . In these cases, local descent may or may not define a path to the solution, but the path contains many steps, so following the path incurs a 292
  • 309. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS high computational cost. Sometimes local information provides us no guide, when the function has a wide flat region, or if we manage to land exactly on a critical point (usually this latter scenario only happens to methods that solve explicitly for critical points, such as Newton’s method). In these cases, local descent does not define a path to a solution at all. In other cases, local moves can be too greedy and lead us along a path that moves downhill but away from any solution, as in figure , or along an unnecessarily long trajectory to the solution, as in figure . 8.4 8.2 Currently, we do not understand which of these problems are most relevant to making neural network optimization difficult, and this is an active area of research. Regardless of which of these problems are most significant, all of them might be avoided if there exists a region of space connected reasonably directly to a solution by a path that local descent can follow, and if we are able to initialize learning within that well-behaved region. This last view suggests research into choosing good initial points for traditional optimization algorithms to use. 8.2.8 Theoretical Limits of Optimization Several theoretical results show that there are limits on the performance of any optimization algorithm we might design for neural networks (Blum and Rivest, 1992 Judd 1989 Wolpert and MacReady 1997 ; , ; , ). Typically these results have little bearing on the use of neural networks in practice. Some theoretical results apply only to the case where the units of a neural network output discrete values. However, most neural network units output smoothly increasing values that make optimization via local search feasible. Some theoretical results show that there exist problem classes that are intractable, but it can be difficult to tell whether a particular problem falls into that class. Other results show that finding a solution for a network of a given size is intractable, but in practice we can find a solution easily by using a larger network for which many more parameter settings correspond to an acceptable solution. Moreover, in the context of neural network training, we usually do not care about finding the exact minimum of a function, but seek only to reduce its value sufficiently to obtain good generalization error. Theoretical analysis of whether an optimization algorithm can accomplish this goal is extremely difficult. Developing more realistic bounds on the performance of optimization algorithms therefore remains an important goal for machine learning research. 293
8.3 Basic Algorithms

We have previously introduced the gradient descent (section 4.3) algorithm that follows the gradient of an entire training set downhill. This may be accelerated considerably by using stochastic gradient descent to follow the gradient of randomly selected minibatches downhill, as discussed in section 5.9 and section 8.1.3.

8.3.1 Stochastic Gradient Descent

Stochastic gradient descent (SGD) and its variants are probably the most used optimization algorithms for machine learning in general and for deep learning in particular. As discussed in section 8.1.3, it is possible to obtain an unbiased estimate of the gradient by taking the average gradient on a minibatch of m examples drawn i.i.d. from the data generating distribution. Algorithm 8.1 shows how to follow this estimate of the gradient downhill.

Algorithm 8.1 Stochastic gradient descent (SGD) update at training iteration k
Require: Learning rate ε_k.
Require: Initial parameter θ.
while stopping criterion not met do
    Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)} with corresponding targets y^(i).
    Compute gradient estimate: ĝ ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i)).
    Apply update: θ ← θ − ε ĝ.
end while

A crucial parameter for the SGD algorithm is the learning rate. Previously, we have described SGD as using a fixed learning rate ε. In practice, it is necessary to gradually decrease the learning rate over time, so we now denote the learning rate at iteration k as ε_k. This is because the SGD gradient estimator introduces a source of noise (the random sampling of m training examples) that does not vanish even when we arrive at a minimum. By comparison, the true gradient of the total cost function becomes small and then 0 when we approach and reach a minimum using batch gradient descent, so batch gradient descent can use a fixed learning rate. A sufficient condition to guarantee convergence of SGD is that

\sum_{k=1}^{\infty} \epsilon_k = \infty, \quad \text{and}   (8.12)
In practice, it is common to decay the learning rate linearly until iteration τ:

ε_k = (1 − α) ε_0 + α ε_τ   (8.14)

with α = k/τ. After iteration τ, it is common to leave ε constant.

The learning rate may be chosen by trial and error, but it is usually best to choose it by monitoring learning curves that plot the objective function as a function of time. This is more of an art than a science, and most guidance on this subject should be regarded with some skepticism. When using the linear schedule, the parameters to choose are ε_0, ε_τ, and τ. Usually τ may be set to the number of iterations required to make a few hundred passes through the training set. Usually ε_τ should be set to roughly 1% the value of ε_0. The main question is how to set ε_0. If it is too large, the learning curve will show violent oscillations, with the cost function often increasing significantly. Gentle oscillations are fine, especially if training with a stochastic cost function such as the cost function arising from the use of dropout. If the learning rate is too low, learning proceeds slowly, and if the initial learning rate is too low, learning may become stuck with a high cost value. Typically, the optimal initial learning rate, in terms of total training time and the final cost value, is higher than the learning rate that yields the best performance after the first 100 iterations or so. Therefore, it is usually best to monitor the first several iterations and use a learning rate that is higher than the best-performing learning rate at this time, but not so high that it causes severe instability.

The most important property of SGD and related minibatch or online gradient-based optimization is that computation time per update does not grow with the number of training examples. This allows convergence even when the number of training examples becomes very large. For a large enough dataset, SGD may converge to within some fixed tolerance of its final test set error before it has processed the entire training set.

To study the convergence rate of an optimization algorithm it is common to measure the excess error J(θ) − min_θ J(θ), which is the amount by which the current cost function exceeds the minimum possible cost. When SGD is applied to a convex problem, the excess error is O(1/√k) after k iterations, while in the strongly convex case it is O(1/k). These bounds cannot be improved unless extra conditions are assumed.
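As a concrete, if simplified, illustration of algorithm 8.1 combined with the linear decay schedule of equation 8.14, the following NumPy sketch trains a small linear-regression model. The synthetic data, the squared-error loss, and the hyperparameter values (ε_0, ε_τ, τ, and the minibatch size) are placeholder assumptions chosen only to make the script self-contained, not recommendations from this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem (placeholder data).
X = rng.normal(size=(1000, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=1000)

def grad(theta, xb, yb):
    """Minibatch gradient of the mean squared error 0.5*(x^T theta - y)^2."""
    return xb.T @ (xb @ theta - yb) / len(yb)

eps0, eps_tau, tau = 0.1, 0.001, 500   # epsilon_0, epsilon_tau, tau (assumed values)
m = 32                                  # minibatch size
theta = np.zeros(5)

for k in range(2000):
    alpha = min(k / tau, 1.0)
    eps_k = (1 - alpha) * eps0 + alpha * eps_tau   # equation 8.14; constant after tau
    idx = rng.integers(0, len(X), size=m)          # sample a minibatch
    theta -= eps_k * grad(theta, X[idx], y[idx])   # SGD update from algorithm 8.1

print("distance to true weights:", np.linalg.norm(theta - true_w))
```

Once k exceeds τ, the schedule simply holds the learning rate at ε_τ, matching the description above.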
Batch gradient descent enjoys better convergence rates than stochastic gradient descent in theory. However, the Cramér-Rao bound (Cramér, 1946; Rao, 1945) states that generalization error cannot decrease faster than O(1/k). Bottou and Bousquet (2008) argue that it therefore may not be worthwhile to pursue an optimization algorithm that converges faster than O(1/k) for machine learning tasks—faster convergence presumably corresponds to overfitting. Moreover, the asymptotic analysis obscures many advantages that stochastic gradient descent has after a small number of steps. With large datasets, the ability of SGD to make rapid initial progress while evaluating the gradient for only very few examples outweighs its slow asymptotic convergence. Most of the algorithms described in the remainder of this chapter achieve benefits that matter in practice but are lost in the constant factors obscured by the O(1/k) asymptotic analysis. One can also trade off the benefits of both batch and stochastic gradient descent by gradually increasing the minibatch size during the course of learning. For more information on SGD, see Bottou (1998).

8.3.2 Momentum

While stochastic gradient descent remains a very popular optimization strategy, learning with it can sometimes be slow. The method of momentum (Polyak, 1964) is designed to accelerate learning, especially in the face of high curvature, small but consistent gradients, or noisy gradients. The momentum algorithm accumulates an exponentially decaying moving average of past gradients and continues to move in their direction. The effect of momentum is illustrated in figure 8.5.

Formally, the momentum algorithm introduces a variable v that plays the role of velocity—it is the direction and speed at which the parameters move through parameter space. The velocity is set to an exponentially decaying average of the negative gradient. The name momentum derives from a physical analogy, in which the negative gradient is a force moving a particle through parameter space, according to Newton's laws of motion. Momentum in physics is mass times velocity. In the momentum learning algorithm, we assume unit mass, so the velocity vector v may also be regarded as the momentum of the particle. A hyperparameter α ∈ [0, 1) determines how quickly the contributions of previous gradients exponentially decay. The update rule is given by:

v ← αv − ε ∇_θ ( (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i)) ),   (8.15)
θ ← θ + v.   (8.16)

The velocity v accumulates the gradient elements ∇_θ ( (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i)) ). The larger α is relative to ε, the more previous gradients affect the current direction. The SGD algorithm with momentum is given in algorithm 8.2.
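The effect described here can be checked numerically. The sketch below applies the update of equations 8.15 and 8.16 to an arbitrary poorly conditioned quadratic loss, in the spirit of figure 8.5; the specific matrix, step size, and iteration count are illustrative assumptions, and the gradient is computed on the full objective rather than a minibatch for clarity.

```python
import numpy as np

# Poorly conditioned quadratic loss J(theta) = 0.5 * theta^T A theta,
# a stand-in for the "canyon" of figure 8.5 (the matrix is an assumption).
A = np.diag([1.0, 100.0])
grad = lambda theta: A @ theta

def run(use_momentum, eps=0.009, alpha=0.9, steps=200):
    theta = np.array([-30.0, 5.0])    # arbitrary starting point
    v = np.zeros(2)
    for _ in range(steps):
        g = grad(theta)
        if use_momentum:
            v = alpha * v - eps * g   # equation 8.15 (full-batch gradient here)
            theta = theta + v         # equation 8.16
        else:
            theta = theta - eps * g   # plain gradient descent for comparison
    return np.linalg.norm(theta)      # distance from the minimum at the origin

print("gradient descent :", run(False))
print("with momentum    :", run(True))
```

With these assumed settings, the momentum run finishes far closer to the minimum, while plain gradient descent is still crawling along the gently sloped axis.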
Figure 8.5: Momentum aims primarily to solve two problems: poor conditioning of the Hessian matrix and variance in the stochastic gradient. Here, we illustrate how momentum overcomes the first of these two problems. The contour lines depict a quadratic loss function with a poorly conditioned Hessian matrix. The red path cutting across the contours indicates the path followed by the momentum learning rule as it minimizes this function. At each step along the way, we draw an arrow indicating the step that gradient descent would take at that point. We can see that a poorly conditioned quadratic objective looks like a long, narrow valley or canyon with steep sides. Momentum correctly traverses the canyon lengthwise, while gradient steps waste time moving back and forth across the narrow axis of the canyon. Compare also figure 4.6, which shows the behavior of gradient descent without momentum.
Previously, the size of the step was simply the norm of the gradient multiplied by the learning rate. Now, the size of the step depends on how large and how aligned a sequence of gradients are. The step size is largest when many successive gradients point in exactly the same direction. If the momentum algorithm always observes gradient g, then it will accelerate in the direction of −g, until reaching a terminal velocity where the size of each step is

ε ||g|| / (1 − α).   (8.17)

It is thus helpful to think of the momentum hyperparameter in terms of 1/(1 − α). For example, α = 0.9 corresponds to multiplying the maximum speed by 10 relative to the gradient descent algorithm.

Common values of α used in practice include 0.5, 0.9, and 0.99. Like the learning rate, α may also be adapted over time. Typically it begins with a small value and is later raised. It is less important to adapt α over time than to shrink ε over time.

Algorithm 8.2 Stochastic gradient descent (SGD) with momentum
Require: Learning rate ε, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i).
  Compute gradient estimate: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  Compute velocity update: v ← αv − εg
  Apply update: θ ← θ + v
end while

We can view the momentum algorithm as simulating a particle subject to continuous-time Newtonian dynamics. The physical analogy can help to build intuition for how the momentum and gradient descent algorithms behave.

The position of the particle at any point in time is given by θ(t). The particle experiences net force f(t). This force causes the particle to accelerate:

f(t) = ∂²θ(t)/∂t².   (8.18)

Rather than viewing this as a second-order differential equation of the position, we can introduce the variable v(t) representing the velocity of the particle at time t and rewrite the Newtonian dynamics as a first-order differential equation:

v(t) = ∂θ(t)/∂t,   (8.19)
f(t) = ∂v(t)/∂t.   (8.20)

The momentum algorithm then consists of solving the differential equations via numerical simulation. A simple numerical method for solving differential equations is Euler's method, which simply consists of simulating the dynamics defined by the equation by taking small, finite steps in the direction of each gradient.

This explains the basic form of the momentum update, but what specifically are the forces? One force is proportional to the negative gradient of the cost function: −∇_θ J(θ). This force pushes the particle downhill along the cost function surface. The gradient descent algorithm would simply take a single step based on each gradient, but the Newtonian scenario used by the momentum algorithm instead uses this force to alter the velocity of the particle. We can think of the particle as being like a hockey puck sliding down an icy surface. Whenever it descends a steep part of the surface, it gathers speed and continues sliding in that direction until it begins to go uphill again.

One other force is necessary. If the only force is the gradient of the cost function, then the particle might never come to rest. Imagine a hockey puck sliding down one side of a valley and straight up the other side, oscillating back and forth forever, assuming the ice is perfectly frictionless. To resolve this problem, we add one other force, proportional to −v(t). In physics terminology, this force corresponds to viscous drag, as if the particle must push through a resistant medium such as syrup. This causes the particle to gradually lose energy over time and eventually converge to a local minimum.

Why do we use −v(t) and viscous drag in particular? Part of the reason to use −v(t) is mathematical convenience—an integer power of the velocity is easy to work with. However, other physical systems have other kinds of drag based on other integer powers of the velocity. For example, a particle traveling through the air experiences turbulent drag, with force proportional to the square of the velocity, while a particle moving along the ground experiences dry friction, with a force of constant magnitude. We can reject each of these options. Turbulent drag, proportional to the square of the velocity, becomes very weak when the velocity is small. It is not powerful enough to force the particle to come to rest. A particle with a non-zero initial velocity that experiences only the force of turbulent drag will move away from its initial position forever, with the distance from the starting point growing like O(log t). We must therefore use a lower power of the velocity. If we use a power of zero, representing dry friction, then the force is too strong. When the force due to the gradient of the cost function is small but non-zero, the constant force due to friction can cause the particle to come to rest before reaching a local minimum. Viscous drag avoids both of these problems—it is weak enough
that the gradient can continue to cause motion until a minimum is reached, but strong enough to prevent motion if the gradient does not justify moving.

8.3.3 Nesterov Momentum

Sutskever et al. (2013) introduced a variant of the momentum algorithm that was inspired by Nesterov's accelerated gradient method (Nesterov, 1983, 2004). The update rules in this case are given by:

v ← αv − ε ∇_θ [ (1/m) Σ_{i=1}^m L(f(x^(i); θ + αv), y^(i)) ],   (8.21)
θ ← θ + v,   (8.22)

where the parameters α and ε play a similar role as in the standard momentum method. The difference between Nesterov momentum and standard momentum is where the gradient is evaluated. With Nesterov momentum the gradient is evaluated after the current velocity is applied. Thus one can interpret Nesterov momentum as attempting to add a correction factor to the standard method of momentum. The complete Nesterov momentum algorithm is presented in algorithm 8.3.

In the convex batch gradient case, Nesterov momentum brings the rate of convergence of the excess error from O(1/k) (after k steps) to O(1/k²), as shown by Nesterov (1983). Unfortunately, in the stochastic gradient case, Nesterov momentum does not improve the rate of convergence.

Algorithm 8.3 Stochastic gradient descent (SGD) with Nesterov momentum
Require: Learning rate ε, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding labels y^(i).
  Apply interim update: θ̃ ← θ + αv
  Compute gradient (at interim point): g ← (1/m) ∇_θ̃ Σ_i L(f(x^(i); θ̃), y^(i))
  Compute velocity update: v ← αv − εg
  Apply update: θ ← θ + v
end while
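Since the only difference from standard momentum is where the gradient is evaluated, the two update rules can be written side by side. The sketch below is illustrative rather than a reference implementation; the quadratic test function and the hyperparameter values are assumptions made for the example.

```python
import numpy as np

def momentum_step(theta, v, grad_fn, eps=0.01, alpha=0.9):
    """Standard momentum (algorithm 8.2): gradient at the current parameters."""
    g = grad_fn(theta)
    v = alpha * v - eps * g
    return theta + v, v

def nesterov_step(theta, v, grad_fn, eps=0.01, alpha=0.9):
    """Nesterov momentum (algorithm 8.3): gradient at the interim point theta + alpha*v."""
    g = grad_fn(theta + alpha * v)     # equation 8.21
    v = alpha * v - eps * g
    return theta + v, v                # equation 8.22

# Tiny usage example on an arbitrary quadratic (assumed for illustration).
A = np.diag([1.0, 50.0])
grad_fn = lambda th: A @ th
theta, v = np.array([3.0, -2.0]), np.zeros(2)
for _ in range(100):
    theta, v = nesterov_step(theta, v, grad_fn)
print(theta)
```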
  • 317. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS 8.4 Parameter Initialization Strategies Some optimization algorithms are not iterative by nature and simply solve for a solution point. Other optimization algorithms are iterative by nature but, when applied to the right class of optimization problems, converge to acceptable solutions in an acceptable amount of time regardless of initialization. Deep learning training algorithms usually do not have either of these luxuries. Training algorithms for deep learning models are usually iterative in nature and thus require the user to specify some initial point from which to begin the iterations. Moreover, training deep models is a sufficiently difficult task that most algorithms are strongly affected by the choice of initialization. The initial point can determine whether the algorithm converges at all, with some initial points being so unstable that the algorithm encounters numerical difficulties and fails altogether. When learning does converge, the initial point can determine how quickly learning converges and whether it converges to a point with high or low cost. Also, points of comparable cost can have wildly varying generalization error, and the initial point can affect the generalization as well. Modern initialization strategies are simple and heuristic. Designing improved initialization strategies is a difficult task because neural network optimization is not yet well understood. Most initialization strategies are based on achieving some nice properties when the network is initialized. However, we do not have a good understanding of which of these properties are preserved under which circumstances after learning begins to proceed. A further difficulty is that some initial points may be beneficial from the viewpoint of optimization but detrimental from the viewpoint of generalization. Our understanding of how the initial point affects generalization is especially primitive, offering little to no guidance for how to select the initial point. Perhaps the only property known with complete certainty is that the initial parameters need to “break symmetry” between different units. If two hidden units with the same activation function are connected to the same inputs, then these units must have different initial parameters. If they have the same initial parameters, then a deterministic learning algorithm applied to a deterministic cost and model will constantly update both of these units in the same way. Even if the model or training algorithm is capable of using stochasticity to compute different updates for different units (for example, if one trains with dropout), it is usually best to initialize each unit to compute a different function from all of the other units. This may help to make sure that no input patterns are lost in the null space of forward propagation and no gradient patterns are lost in the null space of back-propagation. The goal of having each unit compute a different function 301
  • 318. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS motivates random initialization of the parameters. We could explicitly search for a large set of basis functions that are all mutually different from each other, but this often incurs a noticeable computational cost. For example, if we have at most as many outputs as inputs, we could use Gram-Schmidt orthogonalization on an initial weight matrix, and be guaranteed that each unit computes a very different function from each other unit. Random initialization from a high-entropy distribution over a high-dimensional space is computationally cheaper and unlikely to assign any units to compute the same function as each other. Typically, we set the biases for each unit to heuristically chosen constants, and initialize only the weights randomly. Extra parameters, for example, parameters encoding the conditional variance of a prediction, are usually set to heuristically chosen constants much like the biases are. We almost always initialize all the weights in the model to values drawn randomly from a Gaussian or uniform distribution. The choice of Gaussian or uniform distribution does not seem to matter very much, but has not been exhaustively studied. The scale of the initial distribution, however, does have a large effect on both the outcome of the optimization procedure and on the ability of the network to generalize. Larger initial weights will yield a stronger symmetry breaking effect, helping to avoid redundant units. They also help to avoid losing signal during forward or back-propagation through the linear component of each layer—larger values in the matrix result in larger outputs of matrix multiplication. Initial weights that are too large may, however, result in exploding values during forward propagation or back-propagation. In recurrent networks, large weights can also result in chaos (such extreme sensitivity to small perturbations of the input that the behavior of the deterministic forward propagation procedure appears random). To some extent, the exploding gradient problem can be mitigated by gradient clipping (thresholding the values of the gradients before performing a gradient descent step). Large weights may also result in extreme values that cause the activation function to saturate, causing complete loss of gradient through saturated units. These competing factors determine the ideal initial scale of the weights. The perspectives of regularization and optimization can give very different insights into how we should initialize a network. The optimization perspective suggests that the weights should be large enough to propagate information success- fully, but some regularization concerns encourage making them smaller. The use of an optimization algorithm such as stochastic gradient descent that makes small incremental changes to the weights and tends to halt in areas that are nearer to the initial parameters (whether due to getting stuck in a region of low gradient, or 302
due to triggering some early stopping criterion based on overfitting) expresses a prior that the final parameters should be close to the initial parameters. Recall from section 7.8 that gradient descent with early stopping is equivalent to weight decay for some models. In the general case, gradient descent with early stopping is not the same as weight decay, but does provide a loose analogy for thinking about the effect of initialization. We can think of initializing the parameters θ to θ_0 as being similar to imposing a Gaussian prior p(θ) with mean θ_0. From this point of view, it makes sense to choose θ_0 to be near 0. This prior says that it is more likely that units do not interact with each other than that they do interact. Units interact only if the likelihood term of the objective function expresses a strong preference for them to interact. On the other hand, if we initialize θ_0 to large values, then our prior specifies which units should interact with each other, and how they should interact.

Some heuristics are available for choosing the initial scale of the weights. One heuristic is to initialize the weights of a fully connected layer with m inputs and n outputs by sampling each weight from U(−1/√m, 1/√m), while Glorot and Bengio (2010) suggest using the normalized initialization

W_{i,j} ∼ U( −√(6/(m + n)), √(6/(m + n)) ).   (8.23)

This latter heuristic is designed to compromise between the goal of initializing all layers to have the same activation variance and the goal of initializing all layers to have the same gradient variance. The formula is derived using the assumption that the network consists only of a chain of matrix multiplications, with no nonlinearities. Real neural networks obviously violate this assumption, but many strategies designed for the linear model perform reasonably well on its nonlinear counterparts.
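The two heuristics above are easy to state directly in code. The following sketch assumes a fully connected layer stored as an m × n weight matrix; the layer sizes in the usage example are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def glorot_uniform(m, n):
    """Normalized initialization of equation 8.23 for a layer with m inputs, n outputs."""
    limit = np.sqrt(6.0 / (m + n))
    return rng.uniform(-limit, limit, size=(m, n))

def simple_uniform(m, n):
    """The simpler heuristic U(-1/sqrt(m), 1/sqrt(m)) mentioned above."""
    limit = 1.0 / np.sqrt(m)
    return rng.uniform(-limit, limit, size=(m, n))

# Example: weights for a 784 -> 256 fully connected layer; biases start at zero.
W = glorot_uniform(784, 256)
b = np.zeros(256)
print(W.std(), simple_uniform(784, 256).std())
```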
Saxe et al. (2013) recommend initializing to random orthogonal matrices, with a carefully chosen scaling or gain factor g that accounts for the nonlinearity applied at each layer. They derive specific values of the scaling factor for different types of nonlinear activation functions. This initialization scheme is also motivated by a model of a deep network as a sequence of matrix multiplies without nonlinearities. Under such a model, this initialization scheme guarantees that the total number of training iterations required to reach convergence is independent of depth.

Increasing the scaling factor g pushes the network toward the regime where activations increase in norm as they propagate forward through the network and gradients increase in norm as they propagate backward. Sussillo (2014) showed that setting the gain factor correctly is sufficient to train networks as deep as 1,000 layers, without needing to use orthogonal initializations. A key insight of this approach is that in feedforward networks, activations and gradients can grow or shrink on each step of forward or back-propagation, following a random walk behavior. This is because feedforward networks use a different weight matrix at each layer. If this random walk is tuned to preserve norms, then feedforward networks can mostly avoid the vanishing and exploding gradients problem that arises when the same weight matrix is used at each step, described in section 8.2.5.

Unfortunately, these optimal criteria for initial weights often do not lead to optimal performance. This may be for three different reasons. First, we may be using the wrong criteria—it may not actually be beneficial to preserve the norm of a signal throughout the entire network. Second, the properties imposed at initialization may not persist after learning has begun to proceed. Third, the criteria might succeed at improving the speed of optimization but inadvertently increase generalization error. In practice, we usually need to treat the scale of the weights as a hyperparameter whose optimal value lies somewhere roughly near but not exactly equal to the theoretical predictions.

One drawback to scaling rules that set all of the initial weights to have the same standard deviation, such as 1/√m, is that every individual weight becomes extremely small when the layers become large. Martens (2010) introduced an alternative initialization scheme called sparse initialization, in which each unit is initialized to have exactly k non-zero weights. The idea is to keep the total amount of input to the unit independent of the number of inputs m without making the magnitude of individual weight elements shrink with m. Sparse initialization helps to achieve more diversity among the units at initialization time. However, it also imposes a very strong prior on the weights that are chosen to have large Gaussian values. Because it takes a long time for gradient descent to shrink "incorrect" large values, this initialization scheme can cause problems for units such as maxout units that have several filters that must be carefully coordinated with each other.

When computational resources allow it, it is usually a good idea to treat the initial scale of the weights for each layer as a hyperparameter, and to choose these scales using a hyperparameter search algorithm described in section 11.4.2, such as random search. The choice of whether to use dense or sparse initialization can also be made a hyperparameter. Alternately, one can manually search for the best initial scales. A good rule of thumb for choosing the initial scales is to look at the range or standard deviation of activations or gradients on a single minibatch of data. If the weights are too small, the range of activations across the minibatch will shrink as the activations propagate forward through the network. By repeatedly identifying the first layer with unacceptably small activations and
  • 321. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS increasing its weights, it is possible to eventually obtain a network with reasonable initial activations throughout. If learning is still too slow at this point, it can be useful to look at the range or standard deviation of the gradients as well as the activations. This procedure can in principle be automated and is generally less computationally costly than hyperparameter optimization based on validation set error because it is based on feedback from the behavior of the initial model on a single batch of data, rather than on feedback from a trained model on the validation set. While long used heuristically, this protocol has recently been specified more formally and studied by ( ). Mishkin and Matas 2015 So far we have focused on the initialization of the weights. Fortunately, initialization of other parameters is typically easier. The approach for setting the biases must be coordinated with the approach for settings the weights. Setting the biases to zero is compatible with most weight initialization schemes. There are a few situations where we may set some biases to non-zero values: • If a bias is for an output unit, then it is often beneficial to initialize the bias to obtain the right marginal statistics of the output. To do this, we assume that the initial weights are small enough that the output of the unit is determined only by the bias. This justifies setting the bias to the inverse of the activation function applied to the marginal statistics of the output in the training set. For example, if the output is a distribution over classes and this distribution is a highly skewed distribution with the marginal probability of class i given by element ci of some vector c, then we can set the bias vector b by solving the equation softmax(b) = c. This applies not only to classifiers but also to models we will encounter in Part , such as autoencoders and Boltzmann III machines. These models have layers whose output should resemble the input data x, and it can be very helpful to initialize the biases of such layers to match the marginal distribution over . x • Sometimes we may want to choose the bias to avoid causing too much saturation at initialization. For example, we may set the bias of a ReLU hidden unit to 0.1 rather than 0 to avoid saturating the ReLU at initialization. This approach is not compatible with weight initialization schemes that do not expect strong input from the biases though. For example, it is not recommended for use with random walk initialization ( , ). Sussillo 2014 • Sometimes a unit controls whether other units are able to participate in a function. In such situations, we have a unit with output u and another unit h ∈ [0, 1], and they are multiplied together to produce an output uh. We 305
  • 322. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS can view h as a gate that determines whether uh u ≈ or uh ≈ 0. In these situations, we want to set the bias for h so that h ≈ 1 most of the time at initialization. Otherwise u does not have a chance to learn. For example, Jozefowicz 2015 et al. ( ) advocate setting the bias to for the forget gate of 1 the LSTM model, described in section . 10.10 Another common type of parameter is a variance or precision parameter. For example, we can perform linear regression with a conditional variance estimate using the model p y y ( | N x) = ( | wT x + 1 ) b, /β (8.24) where β is a precision parameter. We can usually initialize variance or precision parameters to 1 safely. Another approach is to assume the initial weights are close enough to zero that the biases may be set while ignoring the effect of the weights, then set the biases to produce the correct marginal mean of the output, and set the variance parameters to the marginal variance of the output in the training set. Besides these simple constant or random methods of initializing model parame- ters, it is possible to initialize model parameters using machine learning. A common strategy discussed in part of this book is to initialize a supervised model with III the parameters learned by an unsupervised model trained on the same inputs. One can also perform supervised training on a related task. Even performing supervised training on an unrelated task can sometimes yield an initialization that offers faster convergence than a random initialization. Some of these initialization strategies may yield faster convergence and better generalization because they encode information about the distribution in the initial parameters of the model. Others apparently perform well primarily because they set the parameters to have the right scale or set different units to compute different functions from each other. 8.5 Algorithms with Adaptive Learning Rates Neural network researchers have long realized that the learning rate was reliably one of the hyperparameters that is the most difficult to set because it has a significant impact on model performance. As we have discussed in sections and , the 4.3 8.2 cost is often highly sensitive to some directions in parameter space and insensitive to others. The momentum algorithm can mitigate these issues somewhat, but does so at the expense of introducing another hyperparameter. In the face of this, it is natural to ask if there is another way. If we believe that the directions of sensitivity are somewhat axis-aligned, it can make sense to use a separate learning 306
  • 323. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS rate for each parameter, and automatically adapt these learning rates throughout the course of learning. The algorithm ( , ) is an early heuristic approach delta-bar-delta Jacobs 1988 to adapting individual learning rates for model parameters during training. The approach is based on a simple idea: if the partial derivative of the loss, with respect to a given model parameter, remains the same sign, then the learning rate should increase. If the partial derivative with respect to that parameter changes sign, then the learning rate should decrease. Of course, this kind of rule can only be applied to full batch optimization. More recently, a number of incremental (or mini-batch-based) methods have been introduced that adapt the learning rates of model parameters. This section will briefly review a few of these algorithms. 8.5.1 AdaGrad The AdaGrad algorithm, shown in algorithm , individually adapts the learning 8.4 rates of all model parameters by scaling them inversely proportional to the square root of the sum of all of their historical squared values ( , ). The Duchi et al. 2011 parameters with the largest partial derivative of the loss have a correspondingly rapid decrease in their learning rate, while parameters with small partial derivatives have a relatively small decrease in their learning rate. The net effect is greater progress in the more gently sloped directions of parameter space. In the context of convex optimization, the AdaGrad algorithm enjoys some desirable theoretical properties. However, empirically it has been found that—for training deep neural network models—the accumulation of squared gradients from the beginning of training can result in a premature and excessive decrease in the effective learning rate. AdaGrad performs well for some but not all deep learning models. 8.5.2 RMSProp The RMSProp algorithm ( , ) modifies AdaGrad to perform better in Hinton 2012 the non-convex setting by changing the gradient accumulation into an exponentially weighted moving average. AdaGrad is designed to converge rapidly when applied to a convex function. When applied to a non-convex function to train a neural network, the learning trajectory may pass through many different structures and eventually arrive at a region that is a locally convex bowl. AdaGrad shrinks the learning rate according to the entire history of the squared gradient and may 307
Algorithm 8.4 The AdaGrad algorithm
Require: Global learning rate ε
Require: Initial parameter θ
Require: Small constant δ, perhaps 10⁻⁷, for numerical stability
Initialize gradient accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i).
  Compute gradient: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  Accumulate squared gradient: r ← r + g ⊙ g
  Compute update: Δθ ← −(ε / (δ + √r)) ⊙ g   (division and square root applied element-wise)
  Apply update: θ ← θ + Δθ
end while

have made the learning rate too small before arriving at such a convex structure. RMSProp uses an exponentially decaying average to discard history from the extreme past so that it can converge rapidly after finding a convex bowl, as if it were an instance of the AdaGrad algorithm initialized within that bowl.

RMSProp is shown in its standard form in algorithm 8.5 and combined with Nesterov momentum in algorithm 8.6. Compared to AdaGrad, the use of the moving average introduces a new hyperparameter, ρ, that controls the length scale of the moving average.

Empirically, RMSProp has been shown to be an effective and practical optimization algorithm for deep neural networks. It is currently one of the go-to optimization methods being employed routinely by deep learning practitioners.

Algorithm 8.5 The RMSProp algorithm
Require: Global learning rate ε, decay rate ρ
Require: Initial parameter θ
Require: Small constant δ, usually 10⁻⁶, used to stabilize division by small numbers
Initialize accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i).
  Compute gradient: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  Accumulate squared gradient: r ← ρr + (1 − ρ) g ⊙ g
  Compute parameter update: Δθ = −(ε / √(δ + r)) ⊙ g   (1/√(δ + r) applied element-wise)
  Apply update: θ ← θ + Δθ
end while
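The only difference between the accumulation rules of algorithms 8.4 and 8.5 is whether the squared gradients are summed or exponentially averaged, which is easy to see when both steps are written out. In the sketch below the δ constants mirror the values suggested above, while the learning rates, ρ, and the toy gradient function are illustrative assumptions.

```python
import numpy as np

def adagrad_update(theta, g, r, eps=0.01, delta=1e-7):
    """One AdaGrad step (algorithm 8.4): r accumulates every squared gradient seen so far."""
    r = r + g * g
    theta = theta - (eps / (delta + np.sqrt(r))) * g
    return theta, r

def rmsprop_update(theta, g, r, eps=0.001, rho=0.9, delta=1e-6):
    """One RMSProp step (algorithm 8.5): r is an exponentially weighted moving average."""
    r = rho * r + (1 - rho) * g * g
    theta = theta - (eps / np.sqrt(delta + r)) * g
    return theta, r

# Minimal usage on a toy quadratic gradient (the objective is an assumption for illustration).
grad = lambda th: np.array([1.0, 100.0]) * th
theta_a = theta_r = np.array([1.0, 1.0])
r_a = r_r = np.zeros(2)
for _ in range(500):
    theta_a, r_a = adagrad_update(theta_a, grad(theta_a), r_a)
    theta_r, r_r = rmsprop_update(theta_r, grad(theta_r), r_r)
print(theta_a, theta_r)
```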
8.5.3 Adam

Adam (Kingma and Ba, 2014) is yet another adaptive learning rate optimization algorithm and is presented in algorithm 8.7. The name "Adam" derives from the phrase "adaptive moments." In the context of the earlier algorithms, it is perhaps best seen as a variant on the combination of RMSProp and momentum with a few important distinctions. First, in Adam, momentum is incorporated directly as an estimate of the first-order moment (with exponential weighting) of the gradient. The most straightforward way to add momentum to RMSProp is to apply momentum to the rescaled gradients. The use of momentum in combination with rescaling does not have a clear theoretical motivation. Second, Adam includes bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments to account for their initialization at the origin (see algorithm 8.7). RMSProp also incorporates an estimate of the (uncentered) second-order moment; however, it lacks the correction factor. Thus, unlike in Adam, the RMSProp second-order moment estimate may have high bias early in training. Adam is generally regarded as being fairly robust to the choice of hyperparameters, though the learning rate sometimes needs to be changed from the suggested default.

Algorithm 8.7 The Adam algorithm
Require: Step size ε (suggested default: 0.001)
Require: Exponential decay rates for moment estimates, ρ_1 and ρ_2 in [0, 1) (suggested defaults: 0.9 and 0.999, respectively)
Require: Small constant δ used for numerical stabilization (suggested default: 10⁻⁸)
Require: Initial parameters θ
Initialize 1st and 2nd moment variables s = 0, r = 0
Initialize time step t = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i).
  Compute gradient: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  t ← t + 1
  Update biased first moment estimate: s ← ρ_1 s + (1 − ρ_1) g
  Update biased second moment estimate: r ← ρ_2 r + (1 − ρ_2) g ⊙ g
  Correct bias in first moment: ŝ ← s / (1 − ρ_1^t)
  Correct bias in second moment: r̂ ← r / (1 − ρ_2^t)
  Compute update: Δθ = −ε ŝ / (√r̂ + δ)   (operations applied element-wise)
  Apply update: θ ← θ + Δθ
end while
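A compact sketch of one step of algorithm 8.7 follows. It operates on NumPy arrays and reuses the suggested defaults from the algorithm box; the toy quadratic gradient in the usage example is an assumption made only to show how the state (s, r, t) is threaded between calls.

```python
import numpy as np

def adam_update(theta, g, s, r, t, eps=0.001, rho1=0.9, rho2=0.999, delta=1e-8):
    """One Adam step following algorithm 8.7; theta and g are NumPy arrays."""
    t = t + 1
    s = rho1 * s + (1 - rho1) * g              # biased first-moment estimate
    r = rho2 * r + (1 - rho2) * g * g          # biased second-moment estimate
    s_hat = s / (1 - rho1 ** t)                # bias corrections for the
    r_hat = r / (1 - rho2 ** t)                # initialization at the origin
    theta = theta - eps * s_hat / (np.sqrt(r_hat) + delta)
    return theta, s, r, t

# Minimal usage on an arbitrary quadratic gradient (illustrative assumption).
grad = lambda th: np.array([1.0, 100.0]) * th
theta, s, r, t = np.array([1.0, 1.0]), np.zeros(2), np.zeros(2), 0
for _ in range(2000):
    theta, s, r, t = adam_update(theta, grad(theta), s, r, t)
print(theta)
```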
8.5.4 Choosing the Right Optimization Algorithm

In this section, we discussed a series of related algorithms that each seek to address the challenge of optimizing deep models by adapting the learning rate for each model parameter. At this point, a natural question is: which algorithm should one choose? Unfortunately, there is currently no consensus on this point. Schaul et al. (2014) presented a valuable comparison of a large number of optimization algorithms across a wide range of learning tasks. While the results suggest that the family of algorithms with adaptive learning rates (represented by RMSProp and AdaDelta) performed fairly robustly, no single best algorithm has emerged.

Currently, the most popular optimization algorithms actively in use include SGD, SGD with momentum, RMSProp, RMSProp with momentum, AdaDelta and Adam. The choice of which algorithm to use, at this point, seems to depend largely on the user's familiarity with the algorithm (for ease of hyperparameter tuning).

Algorithm 8.6 RMSProp algorithm with Nesterov momentum
Require: Global learning rate ε, decay rate ρ, momentum coefficient α
Require: Initial parameter θ, initial velocity v
Initialize accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i).
  Compute interim update: θ̃ ← θ + αv
  Compute gradient: g ← (1/m) ∇_θ̃ Σ_i L(f(x^(i); θ̃), y^(i))
  Accumulate gradient: r ← ρr + (1 − ρ) g ⊙ g
  Compute velocity update: v ← αv − (ε / √r) ⊙ g   (1/√r applied element-wise)
  Apply update: θ ← θ + v
end while

8.6 Approximate Second-Order Methods

In this section we discuss the application of second-order methods to the training of deep networks. See LeCun et al. (1998a) for an earlier treatment of this subject. For simplicity of exposition, the only objective function we examine is the empirical risk:

J(θ) = E_{x,y∼p̂_data} [L(f(x; θ), y)] = (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i)).   (8.25)

However, the methods we discuss here extend readily to more general objective functions that, for instance, include parameter regularization terms such as those discussed in chapter 7.

8.6.1 Newton's Method

In section 4.3, we introduced second-order gradient methods. In contrast to first-order methods, second-order methods make use of second derivatives to improve optimization. The most widely used second-order method is Newton's method. We now describe Newton's method in more detail, with emphasis on its application to neural network training.
Newton's method is an optimization scheme based on using a second-order Taylor series expansion to approximate J(θ) near some point θ_0, ignoring derivatives of higher order:

J(θ) ≈ J(θ_0) + (θ − θ_0)ᵀ ∇_θ J(θ_0) + ½ (θ − θ_0)ᵀ H (θ − θ_0),   (8.26)

where H is the Hessian of J with respect to θ evaluated at θ_0. If we then solve for the critical point of this function, we obtain the Newton parameter update rule:

θ* = θ_0 − H⁻¹ ∇_θ J(θ_0).   (8.27)

Thus for a locally quadratic function (with positive definite H), by rescaling the gradient by H⁻¹, Newton's method jumps directly to the minimum. If the objective function is convex but not quadratic (there are higher-order terms), this update can be iterated, yielding the training algorithm associated with Newton's method, given in algorithm 8.8.

Algorithm 8.8 Newton's method with objective J(θ) = (1/m) Σ_{i=1}^m L(f(x^(i); θ), y^(i))
Require: Initial parameter θ_0
Require: Training set of m examples
while stopping criterion not met do
  Compute gradient: g ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  Compute Hessian: H ← (1/m) ∇²_θ Σ_i L(f(x^(i); θ), y^(i))
  Compute Hessian inverse: H⁻¹
  Compute update: Δθ = −H⁻¹ g
  Apply update: θ = θ + Δθ
end while
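For a quadratic objective the update of equation 8.27 can be verified in a few lines: one Newton step lands exactly on the minimum. The positive definite matrix below is an arbitrary assumption, and the damped variant at the end anticipates the regularized update of equation 8.28 discussed next.

```python
import numpy as np

# An arbitrary positive definite quadratic J(theta) = 0.5 theta^T A theta - b^T theta
# (assumed test problem); its exact minimizer is A^{-1} b.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

gradient = lambda theta: A @ theta - b
hessian = lambda theta: A              # constant Hessian for a quadratic

theta0 = np.zeros(2)
H = hessian(theta0)
theta_star = theta0 - np.linalg.solve(H, gradient(theta0))   # equation 8.27
print(np.allclose(theta_star, np.linalg.solve(A, b)))        # True: one step reaches the minimum

# Damped variant in the spirit of equation 8.28: adding alpha*I to the Hessian
# interpolates toward a scaled gradient step as alpha grows.
alpha = 0.1
theta_damped = theta0 - np.linalg.solve(H + alpha * np.eye(2), gradient(theta0))
print(theta_damped)
```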
For surfaces that are not quadratic, as long as the Hessian remains positive definite, Newton's method can be applied iteratively. This implies a two-step iterative procedure. First, update or compute the inverse Hessian (i.e., by updating the quadratic approximation). Second, update the parameters according to equation 8.27.

In section 8.2.3, we discussed how Newton's method is appropriate only when the Hessian is positive definite. In deep learning, the surface of the objective function is typically non-convex with many features, such as saddle points, that are problematic for Newton's method. If the eigenvalues of the Hessian are not all positive, for example, near a saddle point, then Newton's method can actually cause updates to move in the wrong direction. This situation can be avoided by regularizing the Hessian. Common regularization strategies include adding a constant, α, along the diagonal of the Hessian. The regularized update becomes

θ* = θ_0 − [H(f(θ_0)) + αI]⁻¹ ∇_θ f(θ_0).   (8.28)

This regularization strategy is used in approximations to Newton's method, such as the Levenberg–Marquardt algorithm (Levenberg, 1944; Marquardt, 1963), and works fairly well as long as the negative eigenvalues of the Hessian are still relatively close to zero. In cases where there are more extreme directions of curvature, the value of α would have to be sufficiently large to offset the negative eigenvalues. However, as α increases in size, the Hessian becomes dominated by the αI diagonal and the direction chosen by Newton's method converges to the standard gradient divided by α. When strong negative curvature is present, α may need to be so large that Newton's method would make smaller steps than gradient descent with a properly chosen learning rate.

Beyond the challenges created by certain features of the objective function,
  • 329. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS such as saddle points, the application of Newton’s method for training large neural networks is limited by the significant computational burden it imposes. The number of elements in the Hessian is squared in the number of parameters, so with k parameters (and for even very small neural networks the number of parameters k can be in the millions), Newton’s method would require the inversion of a k k × matrix—with computational complexity of O(k3). Also, since the parameters will change with every update, the inverse Hessian has to be computed at every training iteration. As a consequence, only networks with a very small number of parameters can be practically trained via Newton’s method. In the remainder of this section, we will discuss alternatives that attempt to gain some of the advantages of Newton’s method while side-stepping the computational hurdles. 8.6.2 Conjugate Gradients Conjugate gradients is a method to efficiently avoid the calculation of the inverse Hessian by iteratively descending conjugate directions. The inspiration for this approach follows from a careful study of the weakness of the method of steepest descent (see section for details), where line searches are applied iteratively in 4.3 the direction associated with the gradient. Figure illustrates how the method of 8.6 steepest descent, when applied in a quadratic bowl, progresses in a rather ineffective back-and-forth, zig-zag pattern. This happens because each line search direction, when given by the gradient, is guaranteed to be orthogonal to the previous line search direction. Let the previous search direction be dt−1. At the minimum, where the line search terminates, the directional derivative is zero in direction dt−1: ∇θJ(θ) · dt−1 = 0. Since the gradient at this point defines the current search direction, dt = ∇θJ (θ) will have no contribution in the direction dt−1. Thus dt is orthogonal to dt−1. This relationship between dt−1 and dt is illustrated in figure for 8.6 multiple iterations of steepest descent. As demonstrated in the figure, the choice of orthogonal directions of descent do not preserve the minimum along the previous search directions. This gives rise to the zig-zag pattern of progress, where by descending to the minimum in the current gradient direction, we must re-minimize the objective in the previous gradient direction. Thus, by following the gradient at the end of each line search we are, in a sense, undoing progress we have already made in the direction of the previous line search. The method of conjugate gradients seeks to address this problem. In the method of conjugate gradients, we seek to find a search direction that is conjugate to the previous line search direction, i.e. it will not undo progress made in that direction. At training iteration t, the next search direction dt takes 313
Figure 8.6: The method of steepest descent applied to a quadratic cost surface. The method of steepest descent involves jumping to the point of lowest cost along the line defined by the gradient at the initial point on each step. This resolves some of the problems seen with using a fixed learning rate in figure 4.6, but even with the optimal step size the algorithm still makes back-and-forth progress toward the optimum. By definition, at the minimum of the objective along a given direction, the gradient at the final point is orthogonal to that direction.

the form:

d_t = ∇_θ J(θ) + β_t d_{t−1},   (8.29)

where β_t is a coefficient whose magnitude controls how much of the direction, d_{t−1}, we should add back to the current search direction.

Two directions, d_t and d_{t−1}, are defined as conjugate if d_tᵀ H d_{t−1} = 0, where H is the Hessian matrix.

The straightforward way to impose conjugacy would involve calculation of the eigenvectors of H to choose β_t, which would not satisfy our goal of developing a method that is more computationally viable than Newton's method for large problems. Can we calculate the conjugate directions without resorting to these calculations? Fortunately the answer to that is yes.

Two popular methods for computing the β_t are:

1. Fletcher-Reeves:
β_t = ( ∇_θ J(θ_t)ᵀ ∇_θ J(θ_t) ) / ( ∇_θ J(θ_{t−1})ᵀ ∇_θ J(θ_{t−1}) )   (8.30)
2. Polak-Ribière:
β_t = ( (∇_θ J(θ_t) − ∇_θ J(θ_{t−1}))ᵀ ∇_θ J(θ_t) ) / ( ∇_θ J(θ_{t−1})ᵀ ∇_θ J(θ_{t−1}) )   (8.31)

For a quadratic surface, the conjugate directions ensure that the gradient along the previous direction does not increase in magnitude. We therefore stay at the minimum along the previous directions. As a consequence, in a k-dimensional parameter space, the conjugate gradient method requires at most k line searches to achieve the minimum. The conjugate gradient algorithm is given in algorithm 8.9.

Algorithm 8.9 The conjugate gradient method
Require: Initial parameters θ_0
Require: Training set of m examples
Initialize ρ_0 = 0
Initialize g_0 = 0
Initialize t = 1
while stopping criterion not met do
  Initialize the gradient g_t = 0
  Compute gradient: g_t ← (1/m) ∇_θ Σ_i L(f(x^(i); θ), y^(i))
  Compute β_t = ( (g_t − g_{t−1})ᵀ g_t ) / ( g_{t−1}ᵀ g_{t−1} )   (Polak-Ribière)
  (Nonlinear conjugate gradient: optionally reset β_t to zero, for example if t is a multiple of some constant k, such as k = 5)
  Compute search direction: ρ_t = −g_t + β_t ρ_{t−1}
  Perform line search to find: ε* = argmin_ε (1/m) Σ_{i=1}^m L(f(x^(i); θ_t + ε ρ_t), y^(i))
  (On a truly quadratic cost function, analytically solve for ε* rather than explicitly searching for it)
  Apply update: θ_{t+1} = θ_t + ε* ρ_t
  t ← t + 1
end while
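Algorithm 8.9 simplifies considerably on a quadratic objective, where the line search can be solved analytically. The sketch below uses an arbitrary positive definite matrix as an assumed test problem and checks that a k-dimensional quadratic is minimized after at most k conjugate-gradient line searches.

```python
import numpy as np

# Quadratic test objective J(theta) = 0.5 theta^T A theta - b^T theta (illustrative only).
rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6 * np.eye(6)          # symmetric positive definite by construction
b = rng.normal(size=6)

grad = lambda theta: A @ theta - b

theta = np.zeros(6)
g_prev = grad(theta)
d = -g_prev                           # first search direction: steepest descent
for t in range(6):                    # at most k = 6 line searches on a 6-D quadratic
    eps_star = -(g_prev @ d) / (d @ A @ d)          # exact line search along d (quadratic case)
    theta = theta + eps_star * d
    g = grad(theta)
    beta = ((g - g_prev) @ g) / (g_prev @ g_prev)   # Polak-Ribière, equation 8.31
    d = -g + beta * d                 # next conjugate direction, as in algorithm 8.9
    g_prev = g

print(np.allclose(theta, np.linalg.solve(A, b)))    # True: minimum reached
```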
Nonlinear Conjugate Gradients: So far we have discussed the method of conjugate gradients as it is applied to quadratic objective functions. Of course, our primary interest in this chapter is to explore optimization methods for training neural networks and other related deep learning models where the corresponding objective function is far from quadratic. Perhaps surprisingly, the method of conjugate gradients is still applicable in this setting, though with some modification. Without any assurance that the objective is quadratic, the conjugate directions are no longer assured to remain at the minimum of the objective for previous directions. As a result, the nonlinear conjugate gradients algorithm includes occasional resets where the method of conjugate gradients is restarted with line search along the unaltered gradient.

Practitioners report reasonable results in applications of the nonlinear conjugate gradients algorithm to training neural networks, though it is often beneficial to initialize the optimization with a few iterations of stochastic gradient descent before commencing nonlinear conjugate gradients. Also, while the (nonlinear) conjugate gradients algorithm has traditionally been cast as a batch method, minibatch versions have been used successfully for the training of neural networks (Le et al., 2011). Adaptations of conjugate gradients specifically for neural networks have been proposed earlier, such as the scaled conjugate gradients algorithm (Moller, 1993).

8.6.3 BFGS

The Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm attempts to bring some of the advantages of Newton's method without the computational burden. In that respect, BFGS is similar to the conjugate gradient method. However, BFGS takes a more direct approach to the approximation of Newton's update. Recall that Newton's update is given by

θ* = θ_0 − H⁻¹ ∇_θ J(θ_0),   (8.32)

where H is the Hessian of J with respect to θ evaluated at θ_0. The primary computational difficulty in applying Newton's update is the calculation of the inverse Hessian H⁻¹. The approach adopted by quasi-Newton methods (of which the BFGS algorithm is the most prominent) is to approximate the inverse with a matrix M_t that is iteratively refined by low rank updates to become a better approximation of H⁻¹. The specification and derivation of the BFGS approximation is given in many textbooks on optimization, including Luenberger (1984).

Once the inverse Hessian approximation M_t is updated, the direction of descent ρ_t is determined by ρ_t = M_t g_t. A line search is performed in this direction to determine the size of the step, ε*, taken in this direction. The final update to the parameters is given by:

θ_{t+1} = θ_t + ε* ρ_t.   (8.33)

Like the method of conjugate gradients, the BFGS algorithm iterates a series of line searches with the direction incorporating second-order information. However
  • 333. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS unlike conjugate gradients, the success of the approach is not heavily dependent on the line search finding a point very close to the true minimum along the line. Thus, relative to conjugate gradients, BFGS has the advantage that it can spend less time refining each line search. On the other hand, the BFGS algorithm must store the inverse Hessian matrix, M, that requires O(n2) memory, making BFGS impractical for most modern deep learning models that typically have millions of parameters. Limited Memory BFGS (or L-BFGS) The memory costs of the BFGS algorithm can be significantly decreased by avoiding storing the complete inverse Hessian approximationM . The L-BFGS algorithm computes the approximation M using the same method as the BFGS algorithm, but beginning with the assumption that M( 1) t− is the identity matrix, rather than storing the approximation from one step to the next. If used with exact line searches, the directions defined by L-BFGS are mutually conjugate. However, unlike the method of conjugate gradients, this procedure remains well behaved when the minimum of the line search is reached only approximately. The L-BFGS strategy with no storage described here can be generalized to include more information about the Hessian by storing some of the vectors used to update at each time step, which costs only per step. M O n ( ) 8.7 Optimization Strategies and Meta-Algorithms Many optimization techniques are not exactly algorithms, but rather general templates that can be specialized to yield algorithms, or subroutines that can be incorporated into many different algorithms. 8.7.1 Batch Normalization Batch normalization ( , ) is one of the most exciting recent Ioffe and Szegedy 2015 innovations in optimizing deep neural networks and it is actually not an optimization algorithm at all. Instead, it is a method of adaptive reparametrization, motivated by the difficulty of training very deep models. Very deep models involve the composition of several functions or layers. The gradient tells how to update each parameter, under the assumption that the other layers do not change. In practice, we update all of the layers simultaneously. When we make the update, unexpected results can happen because many functions composed together are changed simultaneously, using updates that were computed under the assumption that the other functions remain constant. As a simple 317
example, suppose we have a deep neural network that has only one unit per layer and does not use an activation function at each hidden layer: ŷ = x w_1 w_2 w_3 ... w_l. Here, w_i provides the weight used by layer i. The output of layer i is h_i = h_{i−1} w_i. The output ŷ is a linear function of the input x, but a nonlinear function of the weights w_i. Suppose our cost function has put a gradient of 1 on ŷ, so we wish to decrease ŷ slightly. The back-propagation algorithm can then compute a gradient g = ∇_w ŷ. Consider what happens when we make an update w ← w − εg. The first-order Taylor series approximation of ŷ predicts that the value of ŷ will decrease by ε gᵀg. If we wanted to decrease ŷ by 0.1, this first-order information available in the gradient suggests we could set the learning rate ε to 0.1 / (gᵀg). However, the actual update will include second-order and third-order effects, on up to effects of order l. The new value of ŷ is given by

x (w_1 − ε g_1)(w_2 − ε g_2) ... (w_l − ε g_l).   (8.34)

An example of one second-order term arising from this update is ε² g_1 g_2 Π_{i=3}^l w_i. This term might be negligible if Π_{i=3}^l w_i is small, or might be exponentially large if the weights on layers 3 through l are greater than 1. This makes it very hard to choose an appropriate learning rate, because the effects of an update to the parameters for one layer depends so strongly on all of the other layers. Second-order optimization algorithms address this issue by computing an update that takes these second-order interactions into account, but we can see that in very deep networks, even higher-order interactions can be significant. Even second-order optimization algorithms are expensive and usually require numerous approximations that prevent them from truly accounting for all significant second-order interactions. Building an n-th order optimization algorithm for n > 2 thus seems hopeless. What can we do instead?

Batch normalization provides an elegant way of reparametrizing almost any deep network. The reparametrization significantly reduces the problem of coordinating updates across many layers. Batch normalization can be applied to any input or hidden layer in a network. Let H be a minibatch of activations of the layer to normalize, arranged as a design matrix, with the activations for each example appearing in a row of the matrix. To normalize H, we replace it with

H′ = (H − µ) / σ,   (8.35)

where µ is a vector containing the mean of each unit and σ is a vector containing the standard deviation of each unit. The arithmetic here is based on broadcasting the vector µ and the vector σ to be applied to every row of the matrix H. Within each row, the arithmetic is element-wise, so H_{i,j} is normalized by subtracting µ_j and dividing by σ_j. The rest of the network then operates on H′ in exactly the same way that the original network operated on H.
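The normalization itself is a few lines of array arithmetic. The sketch below uses the minibatch statistics defined in equations 8.36 and 8.37 just below, along with the optional learned rescaling parameters γ and β discussed later in this section; the activations in the usage example are randomly generated placeholders.

```python
import numpy as np

def batch_norm_forward(H, gamma=None, beta=None, delta=1e-8):
    """Normalize a minibatch of activations H (one example per row).
    gamma and beta, if given together, restore an arbitrary mean and scale."""
    mu = H.mean(axis=0)                                      # per-unit mean (eq. 8.36)
    sigma = np.sqrt(delta + ((H - mu) ** 2).mean(axis=0))    # per-unit std (eq. 8.37)
    H_norm = (H - mu) / sigma                                # eq. 8.35, broadcast over rows
    if gamma is not None and beta is not None:
        H_norm = gamma * H_norm + beta
    return H_norm

rng = np.random.default_rng(5)
H = 3.0 + 2.0 * rng.normal(size=(64, 10))                    # placeholder activations
print(batch_norm_forward(H).mean(axis=0).round(6))           # approximately zero means
```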
and dividing by σ_j. The rest of the network then operates on H′ in exactly the same way that the original network operated on H.

At training time,

µ = (1/m) Σ_i H_{i,:}    (8.36)

and

σ = sqrt( δ + (1/m) Σ_i (H − µ)²_i ),    (8.37)

where δ is a small positive value such as 10^{−8} imposed to avoid encountering the undefined gradient of √z at z = 0. Crucially, we back-propagate through these operations for computing the mean and the standard deviation, and for applying them to normalize H. This means that the gradient will never propose an operation that acts simply to increase the standard deviation or mean of h_i; the normalization operations remove the effect of such an action and zero out its component in the gradient. This was a major innovation of the batch normalization approach. Previous approaches had involved adding penalties to the cost function to encourage units to have normalized activation statistics, or had involved intervening to renormalize unit statistics after each gradient descent step. The former approach usually resulted in imperfect normalization, and the latter usually resulted in significant wasted time, as the learning algorithm repeatedly proposed changing the mean and variance and the normalization step repeatedly undid this change. Batch normalization reparametrizes the model to make some units always be standardized by definition, deftly sidestepping both problems.

At test time, µ and σ may be replaced by running averages that were collected during training time. This allows the model to be evaluated on a single example, without needing to use definitions of µ and σ that depend on an entire minibatch.
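The following is a minimal NumPy sketch of this training-time normalization (equations 8.36–8.37) together with running averages for test time. It is illustrative only: the function names are invented here, the exponentially decaying running average with momentum 0.9 is one common way to collect the test-time statistics rather than the only one, and the learned scale γ and shift β discussed below are omitted.

```python
import numpy as np

def batch_norm_train(H, running_mu, running_sigma, momentum=0.9, delta=1e-8):
    """Normalize a minibatch H (one example per row) as in eqs. 8.36-8.37,
    and update running statistics for use at test time."""
    mu = H.mean(axis=0)                                      # eq. 8.36
    sigma = np.sqrt(delta + ((H - mu) ** 2).mean(axis=0))    # eq. 8.37
    H_norm = (H - mu) / sigma                                # eq. 8.35, broadcast over rows
    running_mu = momentum * running_mu + (1 - momentum) * mu
    running_sigma = momentum * running_sigma + (1 - momentum) * sigma
    return H_norm, running_mu, running_sigma

def batch_norm_test(H, running_mu, running_sigma):
    """At test time, use the collected statistics so that a single example
    can be normalized without an entire minibatch."""
    return (H - running_mu) / running_sigma

H = np.random.randn(32, 4)              # minibatch of 32 examples, 4 units
mu, sigma = np.zeros(4), np.ones(4)
H_norm, mu, sigma = batch_norm_train(H, mu, sigma)
```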
Revisiting the ŷ = x w_1 w_2 ⋯ w_l example, we see that we can mostly resolve the difficulties in learning this model by normalizing h_{l−1}. Suppose that x is drawn from a unit Gaussian. Then h_{l−1} will also come from a Gaussian, because the transformation from x to h_{l−1} is linear. However, h_{l−1} will no longer have zero mean and unit variance. After applying batch normalization, we obtain the normalized ĥ_{l−1} that restores the zero mean and unit variance properties. For almost any update to the lower layers, ĥ_{l−1} will remain a unit Gaussian. The output ŷ may then be learned as a simple linear function ŷ = w_l ĥ_{l−1}. Learning in this model is now very simple because the parameters at the lower layers simply do not have an effect in most cases; their output is always renormalized to a unit Gaussian. In some corner cases, the lower layers can have an effect. Changing one of the lower layer weights to 0 can make the output become degenerate, and changing the sign of one of the lower weights can flip the relationship between ĥ_{l−1} and y. These situations are very rare. Without normalization, nearly every update would have an extreme effect on the statistics of h_{l−1}. Batch normalization has thus made this model significantly easier to learn. In this example, the ease of learning of course came at the cost of making the lower layers useless. In our linear example, the lower layers no longer have any harmful effect, but they also no longer have any beneficial effect. This is because we have normalized out the first and second order statistics, which is all that a linear network can influence. In a deep neural network with nonlinear activation functions, the lower layers can perform nonlinear transformations of the data, so they remain useful. Batch normalization acts to standardize only the mean and variance of each unit in order to stabilize learning, but allows the relationships between units and the nonlinear statistics of a single unit to change.

Because the final layer of the network is able to learn a linear transformation, we may actually wish to remove all linear relationships between units within a layer. Indeed, this is the approach taken by Desjardins et al. (2015), who provided the inspiration for batch normalization. Unfortunately, eliminating all linear interactions is much more expensive than standardizing the mean and standard deviation of each individual unit, and so far batch normalization remains the most practical approach.

Normalizing the mean and standard deviation of a unit can reduce the expressive power of the neural network containing that unit. In order to maintain the expressive power of the network, it is common to replace the batch of hidden unit activations H with γH′ + β rather than simply the normalized H′. The variables γ and β are learned parameters that allow the new variable to have any mean and standard deviation. At first glance, this may seem useless—why did we set the mean to 0, and then introduce a parameter that allows it to be set back to any arbitrary value β? The answer is that the new parametrization can represent the same family of functions of the input as the old parametrization, but the new parametrization has different learning dynamics. In the old parametrization, the mean of H was determined by a complicated interaction between the parameters in the layers below H. In the new parametrization, the mean of γH′ + β is determined solely by β. The new parametrization is much easier to learn with gradient descent.

Most neural network layers take the form of φ(XW + b), where φ is some fixed nonlinear activation function such as the rectified linear transformation. It is natural to wonder whether we should apply batch normalization to the input X, or to the transformed value XW + b. Ioffe and Szegedy (2015) recommend
the latter. More specifically, XW + b should be replaced by a normalized version of XW. The bias term should be omitted because it becomes redundant with the β parameter applied by the batch normalization reparametrization. The input to a layer is usually the output of a nonlinear activation function such as the rectified linear function in a previous layer. The statistics of the input are thus more non-Gaussian and less amenable to standardization by linear operations.

In convolutional networks, described in chapter 9, it is important to apply the same normalizing µ and σ at every spatial location within a feature map, so that the statistics of the feature map remain the same regardless of spatial location.

8.7.2 Coordinate Descent

In some cases, it may be possible to solve an optimization problem quickly by breaking it into separate pieces. If we minimize f(x) with respect to a single variable x_i, then minimize it with respect to another variable x_j and so on, repeatedly cycling through all variables, we are guaranteed to arrive at a (local) minimum. This practice is known as coordinate descent, because we optimize one coordinate at a time. More generally, block coordinate descent refers to minimizing with respect to a subset of the variables simultaneously. The term “coordinate descent” is often used to refer to block coordinate descent as well as the strictly individual coordinate descent.

Coordinate descent makes the most sense when the different variables in the optimization problem can be clearly separated into groups that play relatively isolated roles, or when optimization with respect to one group of variables is significantly more efficient than optimization with respect to all of the variables. For example, consider the cost function

J(H, W) = Σ_{i,j} |H_{i,j}| + Σ_{i,j} (X − W^⊤H)²_{i,j}.    (8.38)

This function describes a learning problem called sparse coding, where the goal is to find a weight matrix W that can linearly decode a matrix of activation values H to reconstruct the training set X. Most applications of sparse coding also involve weight decay or a constraint on the norms of the columns of W, in order to prevent the pathological solution with extremely small H and extremely large W.

The function J is not convex. However, we can divide the inputs to the training algorithm into two sets: the dictionary parameters W and the code representations H. Minimizing the objective function with respect to either one of these sets of variables is a convex problem. Block coordinate descent thus gives
us an optimization strategy that allows us to use efficient convex optimization algorithms, by alternating between optimizing W with H fixed, then optimizing H with W fixed.

Coordinate descent is not a very good strategy when the value of one variable strongly influences the optimal value of another variable, as in the function f(x) = (x_1 − x_2)² + α(x_1² + x_2²), where α is a positive constant. The first term encourages the two variables to have similar values, while the second term encourages them to be near zero. The solution is to set both to zero. Newton’s method can solve the problem in a single step because it is a positive definite quadratic problem. However, for small α, coordinate descent will make very slow progress because the first term does not allow a single variable to be changed to a value that differs significantly from the current value of the other variable.

8.7.3 Polyak Averaging

Polyak averaging (Polyak and Juditsky, 1992) consists of averaging together several points in the trajectory through parameter space visited by an optimization algorithm. If t iterations of gradient descent visit points θ^(1), …, θ^(t), then the output of the Polyak averaging algorithm is θ̂^(t) = (1/t) Σ_i θ^(i). On some problem classes, such as gradient descent applied to convex problems, this approach has strong convergence guarantees. When applied to neural networks, its justification is more heuristic, but it performs well in practice. The basic idea is that the optimization algorithm may leap back and forth across a valley several times without ever visiting a point near the bottom of the valley. The average of all of the locations on either side should be close to the bottom of the valley, though.

In non-convex problems, the path taken by the optimization trajectory can be very complicated and visit many different regions. Including points in parameter space from the distant past that may be separated from the current point by large barriers in the cost function does not seem like a useful behavior. As a result, when applying Polyak averaging to non-convex problems, it is typical to use an exponentially decaying running average:

θ̂^(t) = α θ̂^(t−1) + (1 − α) θ^(t).    (8.39)

The running average approach is used in numerous applications. See Szegedy et al. (2015) for a recent example.
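A minimal sketch of the exponentially decaying running average in equation 8.39 follows. The function name, the decay constant of 0.999, and the representation of the parameters as a list of arrays are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def polyak_update(theta_avg, theta, alpha=0.999):
    """One step of the exponentially decaying running average of eq. 8.39,
    applied elementwise to a list of parameter arrays."""
    return [alpha * avg + (1 - alpha) * p for avg, p in zip(theta_avg, theta)]

# Illustrative usage: maintain the averaged parameters alongside the ones
# being trained, then evaluate the model with theta_avg rather than theta.
theta = [np.random.randn(3, 3), np.random.randn(3)]
theta_avg = [p.copy() for p in theta]
for t in range(100):
    # ... an ordinary SGD (or other) update of theta would go here ...
    theta_avg = polyak_update(theta_avg, theta)
```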
  • 339. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS 8.7.4 Supervised Pretraining Sometimes, directly training a model to solve a specific task can be too ambitious if the model is complex and hard to optimize or if the task is very difficult. It is sometimes more effective to train a simpler model to solve the task, then make the model more complex. It can also be more effective to train the model to solve a simpler task, then move on to confront the final task. These strategies that involve training simple models on simple tasks before confronting the challenge of training the desired model to perform the desired task are collectively known as pretraining. Greedy algorithms break a problem into many components, then solve for the optimal version of each component in isolation. Unfortunately, combining the individually optimal components is not guaranteed to yield an optimal complete solution. However, greedy algorithms can be computationally much cheaper than algorithms that solve for the best joint solution, and the quality of a greedy solution is often acceptable if not optimal. Greedy algorithms may also be followed by a fine-tuning stage in which a joint optimization algorithm searches for an optimal solution to the full problem. Initializing the joint optimization algorithm with a greedy solution can greatly speed it up and improve the quality of the solution it finds. Pretraining, and especially greedy pretraining, algorithms are ubiquitous in deep learning. In this section, we describe specifically those pretraining algorithms that break supervised learning problems into other simpler supervised learning problems. This approach is known as . greedy supervised pretraining In the original ( , ) version of greedy supervised pretraining, Bengio et al. 2007 each stage consists of a supervised learning training task involving only a subset of the layers in the final neural network. An example of greedy supervised pretraining is illustrated in figure , in which each added hidden layer is pretrained as part 8.7 of a shallow supervised MLP, taking as input the output of the previously trained hidden layer. Instead of pretraining one layer at a time, Simonyan and Zisserman ( ) pretrain a deep convolutional network (eleven weight layers) and then use 2015 the first four and last three layers from this network to initialize even deeper networks (with up to nineteen layers of weights). The middle layers of the new, very deep network are initialized randomly. The new network is then jointly trained. Another option, explored by Yu 2010 et al. ( ) is to use the of the previously outputs trained MLPs, as well as the raw input, as inputs for each added stage. Why would greedy supervised pretraining help? The hypothesis initially discussed by ( ) is that it helps to provide better guidance to the Bengio et al. 2007 323
Figure 8.7: Illustration of one form of greedy supervised pretraining (Bengio et al., 2007). (a) We start by training a sufficiently shallow architecture. (b) Another drawing of the same architecture. (c) We keep only the input-to-hidden layer of the original network and discard the hidden-to-output layer. We send the output of the first hidden layer as input to another supervised single hidden layer MLP that is trained with the same objective as the first network was, thus adding a second hidden layer. This can be repeated for as many layers as desired. (d) Another drawing of the result, viewed as a feedforward network. To further improve the optimization, we can jointly fine-tune all the layers, either only at the end or at each stage of this process.
  • 341. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS intermediate levels of a deep hierarchy. In general, pretraining may help both in terms of optimization and in terms of generalization. An approach related to supervised pretraining extends the idea to the context of transfer learning: Yosinski 2014 et al. ( ) pretrain a deep convolutional net with 8 layers of weights on a set of tasks (a subset of the 1000 ImageNet object categories) and then initialize a same-size network with the first k layers of the first net. All the layers of the second network (with the upper layers initialized randomly) are then jointly trained to perform a different set of tasks (another subset of the 1000 ImageNet object categories), with fewer training examples than for the first set of tasks. Other approaches to transfer learning with neural networks are discussed in section . 15.2 Another related line of work is the FitNets ( , ) approach. Romero et al. 2015 This approach begins by training a network that has low enough depth and great enough width (number of units per layer) to be easy to train. This network then becomes a teacher for a second network, designated the student. The student network is much deeper and thinner (eleven to nineteen layers) and would be difficult to train with SGD under normal circumstances. The training of the student network is made easier by training the student network not only to predict the output for the original task, but also to predict the value of the middle layer of the teacher network. This extra task provides a set of hints about how the hidden layers should be used and can simplify the optimization problem. Additional parameters are introduced to regress the middle layer of the 5-layer teacher network from the middle layer of the deeper student network. However, instead of predicting the final classification target, the objective is to predict the middle hidden layer of the teacher network. The lower layers of the student networks thus have two objectives: to help the outputs of the student network accomplish their task, as well as to predict the intermediate layer of the teacher network. Although a thin and deep network appears to be more difficult to train than a wide and shallow network, the thin and deep network may generalize better and certainly has lower computational cost if it is thin enough to have far fewer parameters. Without the hints on the hidden layer, the student network performs very poorly in the experiments, both on the training and test set. Hints on middle layers may thus be one of the tools to help train neural networks that otherwise seem difficult to train, but other optimization techniques or changes in the architecture may also solve the problem. 325
  • 342. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS 8.7.5 Designing Models to Aid Optimization To improve optimization, the best strategy is not always to improve the optimization algorithm. Instead, many improvements in the optimization of deep models have come from designing the models to be easier to optimize. In principle, we could use activation functions that increase and decrease in jagged non-monotonic patterns. However, this would make optimization extremely difficult. In practice, it is more important to choose a model family that is easy to optimize than to use a powerful optimization algorithm. Most of the advances in neural network learning over the past 30 years have been obtained by changing the model family rather than changing the optimization procedure. Stochastic gradient descent with momentum, which was used to train neural networks in the 1980s, remains in use in modern state of the art neural network applications. Specifically, modern neural networks reflect a design choice to use linear trans- formations between layers and activation functions that are differentiable almost everywhere and have significant slope in large portions of their domain. In par- ticular, model innovations like the LSTM, rectified linear units and maxout units have all moved toward using more linear functions than previous models like deep networks based on sigmoidal units. These models have nice properties that make optimization easier. The gradient flows through many layers provided that the Jacobian of the linear transformation has reasonable singular values. Moreover, linear functions consistently increase in a single direction, so even if the model’s output is very far from correct, it is clear simply from computing the gradient which direction its output should move to reduce the loss function. In other words, modern neural nets have been designed so that their local gradient information corresponds reasonably well to moving toward a distant solution. Other model design strategies can help to make optimization easier. For example, linear paths or skip connections between layers reduce the length of the shortest path from the lower layer’s parameters to the output, and thus mitigate the vanishing gradient problem (Srivastava 2015 et al., ). A related idea to skip connections is adding extra copies of the output that are attached to the intermediate hidden layers of the network, as in GoogLeNet ( , ) Szegedy et al. 2014a and deeply-supervised nets ( , ). These “auxiliary heads” are trained Lee et al. 2014 to perform the same task as the primary output at the top of the network in order to ensure that the lower layers receive a large gradient. When training is complete the auxiliary heads may be discarded. This is an alternative to the pretraining strategies, which were introduced in the previous section. In this way, one can train jointly all the layers in a single phase but change the architecture, so that intermediate layers (especially the lower ones) can get some hints about what they 326
should do, via a shorter path. These hints provide an error signal to lower layers.

8.7.6 Continuation Methods and Curriculum Learning

As argued in section 8.2.7, many of the challenges in optimization arise from the global structure of the cost function and cannot be resolved merely by making better estimates of local update directions. The predominant strategy for overcoming this problem is to attempt to initialize the parameters in a region that is connected to the solution by a short path through parameter space that local descent can discover.

Continuation methods are a family of strategies that can make optimization easier by choosing initial points to ensure that local optimization spends most of its time in well-behaved regions of space. The idea behind continuation methods is to construct a series of objective functions over the same parameters. In order to minimize a cost function J(θ), we construct new cost functions {J^(0), …, J^(n)}. These cost functions are designed to be increasingly difficult, with J^(0) being fairly easy to minimize and J^(n), the most difficult, being J(θ), the true cost function motivating the entire process. When we say that J^(i) is easier than J^(i+1), we mean that it is well behaved over more of θ space. A random initialization is more likely to land in the region where local descent can minimize the cost function successfully, because this region is larger. The series of cost functions are designed so that a solution to one is a good initial point for the next. We thus begin by solving an easy problem, then refine the solution to solve incrementally harder problems until we arrive at a solution to the true underlying problem.

Traditional continuation methods (predating the use of continuation methods for neural network training) are usually based on smoothing the objective function. See Wu (1997) for an example of such a method and a review of some related methods. Continuation methods are also closely related to simulated annealing, which adds noise to the parameters (Kirkpatrick et al., 1983). Continuation methods have been extremely successful in recent years. See Mobahi and Fisher (2015) for an overview of recent literature, especially for AI applications.

Continuation methods traditionally were mostly designed with the goal of overcoming the challenge of local minima. Specifically, they were designed to reach a global minimum despite the presence of many local minima. To do so, these continuation methods would construct easier cost functions by “blurring” the original cost function. This blurring operation can be done by approximating

J^(i)(θ) = E_{θ′ ∼ N(θ′; θ, σ^(i)²)} J(θ′)    (8.40)

via sampling. The intuition for this approach is that some non-convex functions
  • 344. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS become approximately convex when blurred. In many cases, this blurring preserves enough information about the location of a global minimum that we can find the global minimum by solving progressively less blurred versions of the problem. This approach can break down in three different ways. First, it might successfully define a series of cost functions where the first is convex and the optimum tracks from one function to the next arriving at the global minimum, but it might require so many incremental cost functions that the cost of the entire procedure remains high. NP-hard optimization problems remain NP-hard, even when continuation methods are applicable. The other two ways that continuation methods fail both correspond to the method not being applicable. First, the function might not become convex, no matter how much it is blurred. Consider for example the function J(θ) = −θθ. Second, the function may become convex as a result of blurring, but the minimum of this blurred function may track to a local rather than a global minimum of the original cost function. Though continuation methods were mostly originally designed to deal with the problem of local minima, local minima are no longer believed to be the primary problem for neural network optimization. Fortunately, continuation methods can still help. The easier objective functions introduced by the continuation method can eliminate flat regions, decrease variance in gradient estimates, improve conditioning of the Hessian matrix, or do anything else that will either make local updates easier to compute or improve the correspondence between local update directions and progress toward a global solution. Bengio 2009 et al. ( ) observed that an approach called curriculum learning or shaping can be interpreted as a continuation method. Curriculum learning is based on the idea of planning a learning process to begin by learning simple concepts and progress to learning more complex concepts that depend on these simpler concepts. This basic strategy was previously known to accelerate progress in animal training ( , ; , ; Skinner 1958 Peterson 2004 Krueger and Dayan 2009 , ) and machine learning ( , ; , ; , ). ( ) Solomonoff 1989 Elman 1993 Sanger 1994 Bengio et al. 2009 justified this strategy as a continuation method, where earlier J( ) i are made easier by increasing the influence of simpler examples (either by assigning their contributions to the cost function larger coefficients, or by sampling them more frequently), and experimentally demonstrated that better results could be obtained by following a curriculum on a large-scale neural language modeling task. Curriculum learning has been successful on a wide range of natural language (Spitkovsky 2010 et al., ; Collobert 2011a Mikolov 2011b Tu and Honavar 2011 et al., ; et al., ; , ) and computer vision ( , ; , ; , ) Kumar et al. 2010 Lee and Grauman 2011 Supancic and Ramanan 2013 tasks. Curriculum learning was also verified as being consistent with the way in which humans teach ( , ): teachers start by showing easier and Khan et al. 2011 328
  • 345. CHAPTER 8. OPTIMIZATION FOR TRAINING DEEP MODELS more prototypical examples and then help the learner refine the decision surface with the less obvious cases. Curriculum-based strategies are more effective for teaching humans than strategies based on uniform sampling of examples, and can also increase the effectiveness of other teaching strategies ( , Basu and Christensen 2013). Another important contribution to research on curriculum learning arose in the context of training recurrent neural networks to capture long-term dependencies: Zaremba and Sutskever 2014 ( ) found that much better results were obtained with a stochastic curriculum, in which a random mix of easy and difficult examples is always presented to the learner, but where the average proportion of the more difficult examples (here, those with longer-term dependencies) is gradually increased. With a deterministic curriculum, no improvement over the baseline (ordinary training from the full training set) was observed. We have now described the basic family of neural network models and how to regularize and optimize them. In the chapters ahead, we turn to specializations of the neural network family, that allow neural networks to scale to very large sizes and process input data that has special structure. The optimization methods discussed in this chapter are often directly applicable to these specialized architectures with little or no modification. 329
  • 346. Chapter 9 Convolutional Networks Convolutional networks ( , ), also known as LeCun 1989 convolutional neural networks or CNNs, are a specialized kind of neural network for processing data that has a known, grid-like topology. Examples include time-series data, which can be thought of as a 1D grid taking samples at regular time intervals, and image data, which can be thought of as a 2D grid of pixels. Convolutional networks have been tremendously successful in practical applications. The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers. In this chapter, we will first describe what convolution is. Next, we will explain the motivation behind using convolution in a neural network. We will then describe an operation called pooling, which almost all convolutional networks employ. Usually, the operation used in a convolutional neural network does not correspond precisely to the definition of convolution as used in other fields such as engineering or pure mathematics. We will describe several variants on the convolution function that are widely used in practice for neural networks. We will also show how convolution may be applied to many kinds of data, with different numbers of dimensions. We then discuss means of making convolution more efficient. Convolutional networks stand out as an example of neuroscientific principles influencing deep learning. We will discuss these neuroscientific principles, then conclude with comments about the role convolutional networks have played in the history of deep learning. One topic this chapter does not address is how to choose the architecture of your convolutional network. The goal of this chapter is to describe the kinds of tools that convolutional networks provide, while chapter 11 330
describes general guidelines for choosing which tools to use in which circumstances. Research into convolutional network architectures proceeds so rapidly that a new best architecture for a given benchmark is announced every few weeks to months, rendering it impractical to describe the best architecture in print. However, the best architectures have consistently been composed of the building blocks described here.

9.1 The Convolution Operation

In its most general form, convolution is an operation on two functions of a real-valued argument. To motivate the definition of convolution, we start with examples of two functions we might use.

Suppose we are tracking the location of a spaceship with a laser sensor. Our laser sensor provides a single output x(t), the position of the spaceship at time t. Both x and t are real-valued, i.e., we can get a different reading from the laser sensor at any instant in time.

Now suppose that our laser sensor is somewhat noisy. To obtain a less noisy estimate of the spaceship’s position, we would like to average together several measurements. Of course, more recent measurements are more relevant, so we will want this to be a weighted average that gives more weight to recent measurements. We can do this with a weighting function w(a), where a is the age of a measurement. If we apply such a weighted average operation at every moment, we obtain a new function s providing a smoothed estimate of the position of the spaceship:

s(t) = ∫ x(a) w(t − a) da    (9.1)

This operation is called convolution. The convolution operation is typically denoted with an asterisk:

s(t) = (x ∗ w)(t)    (9.2)

In our example, w needs to be a valid probability density function, or the output is not a weighted average. Also, w needs to be 0 for all negative arguments, or it will look into the future, which is presumably beyond our capabilities. These limitations are particular to our example, though. In general, convolution is defined for any functions for which the above integral is defined, and may be used for other purposes besides taking weighted averages.

In convolutional network terminology, the first argument (in this example, the function x) to the convolution is often referred to as the input and the second
argument (in this example, the function w) as the kernel. The output is sometimes referred to as the feature map.

In our example, the idea of a laser sensor that can provide measurements at every instant in time is not realistic. Usually, when we work with data on a computer, time will be discretized, and our sensor will provide data at regular intervals. In our example, it might be more realistic to assume that our laser provides a measurement once per second. The time index t can then take on only integer values. If we now assume that x and w are defined only on integer t, we can define the discrete convolution:

s(t) = (x ∗ w)(t) = Σ_{a=−∞}^{∞} x(a) w(t − a)    (9.3)

In machine learning applications, the input is usually a multidimensional array of data and the kernel is usually a multidimensional array of parameters that are adapted by the learning algorithm. We will refer to these multidimensional arrays as tensors. Because each element of the input and kernel must be explicitly stored separately, we usually assume that these functions are zero everywhere but the finite set of points for which we store the values. This means that in practice we can implement the infinite summation as a summation over a finite number of array elements.

Finally, we often use convolutions over more than one axis at a time. For example, if we use a two-dimensional image I as our input, we probably also want to use a two-dimensional kernel K:

S(i, j) = (I ∗ K)(i, j) = Σ_m Σ_n I(m, n) K(i − m, j − n).    (9.4)

Convolution is commutative, meaning we can equivalently write:

S(i, j) = (K ∗ I)(i, j) = Σ_m Σ_n I(i − m, j − n) K(m, n).    (9.5)

Usually the latter formula is more straightforward to implement in a machine learning library, because there is less variation in the range of valid values of m and n.

The commutative property of convolution arises because we have flipped the kernel relative to the input, in the sense that as m increases, the index into the input increases, but the index into the kernel decreases. The only reason to flip the kernel is to obtain the commutative property. While the commutative property
is useful for writing proofs, it is not usually an important property of a neural network implementation. Instead, many neural network libraries implement a related function called the cross-correlation, which is the same as convolution but without flipping the kernel:

S(i, j) = (I ∗ K)(i, j) = Σ_m Σ_n I(i + m, j + n) K(m, n).    (9.6)

Many machine learning libraries implement cross-correlation but call it convolution. In this text we will follow this convention of calling both operations convolution, and specify whether we mean to flip the kernel or not in contexts where kernel flipping is relevant. In the context of machine learning, the learning algorithm will learn the appropriate values of the kernel in the appropriate place, so an algorithm based on convolution with kernel flipping will learn a kernel that is flipped relative to the kernel learned by an algorithm without the flipping. It is also rare for convolution to be used alone in machine learning; instead convolution is used simultaneously with other functions, and the combination of these functions does not commute regardless of whether the convolution operation flips its kernel or not.

See figure 9.1 for an example of convolution (without kernel flipping) applied to a 2-D tensor.

Discrete convolution can be viewed as multiplication by a matrix. However, the matrix has several entries constrained to be equal to other entries. For example, for univariate discrete convolution, each row of the matrix is constrained to be equal to the row above shifted by one element. This is known as a Toeplitz matrix. In two dimensions, a doubly block circulant matrix corresponds to convolution. In addition to these constraints that several elements be equal to each other, convolution usually corresponds to a very sparse matrix (a matrix whose entries are mostly equal to zero). This is because the kernel is usually much smaller than the input image. Any neural network algorithm that works with matrix multiplication and does not depend on specific properties of the matrix structure should work with convolution, without requiring any further changes to the neural network. Typical convolutional neural networks do make use of further specializations in order to deal with large inputs efficiently, but these are not strictly necessary from a theoretical perspective.
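As a concrete illustration of equation 9.6, here is a minimal NumPy sketch of the operation most libraries call convolution, restricted to "valid" output positions; the function name and the nested-loop implementation are illustrative only, and real libraries use far more efficient algorithms.

```python
import numpy as np

def conv2d_valid(I, K):
    """Cross-correlation of eq. 9.6, restricted to positions where the
    kernel K lies entirely within the image I ("valid" convolution)."""
    ih, iw = I.shape
    kh, kw = K.shape
    S = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            # S(i, j) = sum_m sum_n I(i + m, j + n) K(m, n)
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return S

# Illustrative usage on a small input and a 2x2 kernel.
I = np.arange(12, dtype=float).reshape(3, 4)
K = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_valid(I, K).shape)  # (2, 3)
```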
[Figure 9.1 depicts a 3×4 input with entries a–l, a 2×2 kernel with entries w–z, and the resulting 2×3 output whose upper-left element is aw + bx + ey + fz.]

Figure 9.1: An example of 2-D convolution without kernel-flipping. In this case we restrict the output to only positions where the kernel lies entirely within the image, called “valid” convolution in some contexts. We draw boxes with arrows to indicate how the upper-left element of the output tensor is formed by applying the kernel to the corresponding upper-left region of the input tensor.
  • 351. CHAPTER 9. CONVOLUTIONAL NETWORKS 9.2 Motivation Convolution leverages three important ideas that can help improve a machine learning system: sparse interactions, parameter sharing and equivariant representations. Moreover, convolution provides a means for working with inputs of variable size. We now describe each of these ideas in turn. Traditional neural network layers use matrix multiplication by a matrix of parameters with a separate parameter describing the interaction between each input unit and each output unit. This means every output unit interacts with every input unit. Convolutional networks, however, typically have sparse interactions (also referred to as sparse connectivity or sparse weights). This is accomplished by making the kernel smaller than the input. For example, when processing an image, the input image might have thousands or millions of pixels, but we can detect small, meaningful features such as edges with kernels that occupy only tens or hundreds of pixels. This means that we need to store fewer parameters, which both reduces the memory requirements of the model and improves its statistical efficiency. It also means that computing the output requires fewer operations. These improvements in efficiency are usually quite large. If there are m inputs and n outputs, then matrix multiplication requires m n × parameters and the algorithms used in practice have O(m n × ) runtime (per example). If we limit the number of connections each output may have to k, then the sparsely connected approach requires only k n × parameters and O(k n × ) runtime. For many practical applications, it is possible to obtain good performance on the machine learning task while keeping k several orders of magnitude smaller than m. For graphical demonstrations of sparse connectivity, see figure and figure . In a deep convolutional network, 9.2 9.3 units in the deeper layers may indirectly interact with a larger portion of the input, as shown in figure . This allows the network to efficiently describe complicated 9.4 interactions between many variables by constructing such interactions from simple building blocks that each describe only sparse interactions. Parameter sharing refers to using the same parameter for more than one function in a model. In a traditional neural net, each element of the weight matrix is used exactly once when computing the output of a layer. It is multiplied by one element of the input and then never revisited. As a synonym for parameter sharing, one can say that a network has tied weights, because the value of the weight applied to one input is tied to the value of a weight applied elsewhere. In a convolutional neural net, each member of the kernel is used at every position of the input (except perhaps some of the boundary pixels, depending on the design decisions regarding the boundary). The parameter sharing used by the convolution operation means that rather than learning a separate set of parameters 335
Figure 9.2: Sparse connectivity, viewed from below: We highlight one input unit, x3, and also highlight the output units in s that are affected by this unit. (Top) When s is formed by convolution with a kernel of width 3, only three outputs are affected by x. (Bottom) When s is formed by matrix multiplication, connectivity is no longer sparse, so all of the outputs are affected by x3.
Figure 9.3: Sparse connectivity, viewed from above: We highlight one output unit, s3, and also highlight the input units in x that affect this unit. These units are known as the receptive field of s3. (Top) When s is formed by convolution with a kernel of width 3, only three inputs affect s3. (Bottom) When s is formed by matrix multiplication, connectivity is no longer sparse, so all of the inputs affect s3.

Figure 9.4: The receptive field of the units in the deeper layers of a convolutional network is larger than the receptive field of the units in the shallow layers. This effect increases if the network includes architectural features like strided convolution (figure 9.12) or pooling (section 9.3). This means that even though direct connections in a convolutional net are very sparse, units in the deeper layers can be indirectly connected to all or most of the input image.
  • 354. CHAPTER 9. CONVOLUTIONAL NETWORKS x1 x1 x2 x2 x3 x3 s2 s2 s1 s1 s3 s3 x4 x4 s4 s4 x5 x5 s5 s5 x1 x1 x2 x2 x3 x3 x4 x4 x5 x5 s2 s2 s1 s1 s3 s3 s4 s4 s5 s5 Figure 9.5: Parameter sharing: Black arrows indicate the connections that use a particular parameter in two different models. (Top)The black arrows indicate uses of the central element of a 3-element kernel in a convolutional model. Due to parameter sharing, this single parameter is used at all input locations. The single black arrow indicates (Bottom) the use of the central element of the weight matrix in a fully connected model. This model has no parameter sharing so the parameter is used only once. for every location, we learn only one set. This does not affect the runtime of forward propagation—it is still O(k n × )—but it does further reduce the storage requirements of the model to k parameters. Recall that k is usually several orders of magnitude less than m. Since m and n are usually roughly the same size, k is practically insignificant compared to m n × . Convolution is thus dramatically more efficient than dense matrix multiplication in terms of the memory requirements and statistical efficiency. For a graphical depiction of how parameter sharing works, see figure . 9.5 As an example of both of these first two principles in action, figure shows 9.6 how sparse connectivity and parameter sharing can dramatically improve the efficiency of a linear function for detecting edges in an image. In the case of convolution, the particular form of parameter sharing causes the layer to have a property called equivariance to translation. To say a function is equivariant means that if the input changes, the output changes in the same way. Specifically, a function f(x) is equivariant to a function g if f(g(x)) = g(f(x)). In the case of convolution, if we let g be any function that translates the input, i.e., shifts it, then the convolution function is equivariant to g. For example, let I be a function giving image brightness at integer coordinates. Let g be a function 338
  • 355. CHAPTER 9. CONVOLUTIONAL NETWORKS mapping one image function to another image function, such that I = g(I) is the image function with I (x, y) = I(x − 1, y). This shifts every pixel of I one unit to the right. If we apply this transformation to I, then apply convolution, the result will be the same as if we applied convolution to I , then applied the transformation g to the output. When processing time series data, this means that convolution produces a sort of timeline that shows when different features appear in the input. If we move an event later in time in the input, the exact same representation of it will appear in the output, just later in time. Similarly with images, convolution creates a 2-D map of where certain features appear in the input. If we move the object in the input, its representation will move the same amount in the output. This is useful for when we know that some function of a small number of neighboring pixels is useful when applied to multiple input locations. For example, when processing images, it is useful to detect edges in the first layer of a convolutional network. The same edges appear more or less everywhere in the image, so it is practical to share parameters across the entire image. In some cases, we may not wish to share parameters across the entire image. For example, if we are processing images that are cropped to be centered on an individual’s face, we probably want to extract different features at different locations—the part of the network processing the top of the face needs to look for eyebrows, while the part of the network processing the bottom of the face needs to look for a chin. Convolution is not naturally equivariant to some other transformations, such as changes in the scale or rotation of an image. Other mechanisms are necessary for handling these kinds of transformations. Finally, some kinds of data cannot be processed by neural networks defined by matrix multiplication with a fixed-shape matrix. Convolution enables processing of some of these kinds of data. We discuss this further in section . 9.7 9.3 Pooling A typical layer of a convolutional network consists of three stages (see figure ). 9.7 In the first stage, the layer performs several convolutions in parallel to produce a set of linear activations. In the second stage, each linear activation is run through a nonlinear activation function, such as the rectified linear activation function. This stage is sometimes called the detector stage. In the third stage, we use a pooling function to modify the output of the layer further. A pooling function replaces the output of the net at a certain location with a summary statistic of the nearby outputs. For example, the max pooling (Zhou 339
Figure 9.6: Efficiency of edge detection. The image on the right was formed by taking each pixel in the original image and subtracting the value of its neighboring pixel on the left. This shows the strength of all of the vertically oriented edges in the input image, which can be a useful operation for object detection. Both images are 280 pixels tall. The input image is 320 pixels wide, while the output image is 319 pixels wide. This transformation can be described by a convolution kernel containing two elements, and requires 319 × 280 × 3 = 267,960 floating point operations (two multiplications and one addition per output pixel) to compute using convolution. To describe the same transformation with a matrix multiplication would take 320 × 280 × 319 × 280, or over eight billion, entries in the matrix, making convolution four billion times more efficient for representing this transformation. The straightforward matrix multiplication algorithm performs over sixteen billion floating point operations, making convolution roughly 60,000 times more efficient computationally. Of course, most of the entries of the matrix would be zero. If we stored only the nonzero entries of the matrix, then both matrix multiplication and convolution would require the same number of floating point operations to compute. The matrix would still need to contain 2 × 319 × 280 = 178,640 entries. Convolution is an extremely efficient way of describing transformations that apply the same linear transformation of a small, local region across the entire input. (Photo credit: Paula Goodfellow)
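The transformation described in figure 9.6 is easy to reproduce. The sketch below, with illustrative variable names and a random stand-in for the photograph, computes the vertical-edge map by subtracting each pixel's left neighbor, which is equivalent to a valid cross-correlation with the two-element kernel [−1, 1].

```python
import numpy as np

# Random stand-in for a 280 x 320 grayscale image (the real figure uses a photo).
img = np.random.rand(280, 320)

# Each output pixel is an input pixel minus its left neighbor: a 280 x 319 output.
edges = img[:, 1:] - img[:, :-1]
print(edges.shape)  # (280, 319)

# The same result, written as a valid cross-correlation with the kernel [[-1, 1]].
kernel = np.array([[-1.0, 1.0]])
rows, cols = img.shape
edges_conv = np.zeros((rows, cols - 1))
for j in range(cols - 1):
    edges_conv[:, j] = img[:, j] * kernel[0, 0] + img[:, j + 1] * kernel[0, 1]
assert np.allclose(edges, edges_conv)
```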
  • 357. CHAPTER 9. CONVOLUTIONAL NETWORKS Convolutional Layer Input to layer Convolution stage: A ne transform ffi Detector stage: Nonlinearity e.g., rectified linear Pooling stage Next layer Input to layers Convolution layer: A ne transform ffi Detector layer: Nonlinearity e.g., rectified linear Pooling layer Next layer Complex layer terminology Simple layer terminology Figure 9.7: The components of a typical convolutional neural network layer. There are two commonly used sets of terminology for describing these layers. (Left)In this terminology, the convolutional net is viewed as a small number of relatively complex layers, with each layer having many “stages.” In this terminology, there is a one-to-one mapping between kernel tensors and network layers. In this book we generally use this terminology. (Right)In this terminology, the convolutional net is viewed as a larger number of simple layers; every step of processing is regarded as a layer in its own right. This means that not every “layer” has parameters. 341
  • 358. CHAPTER 9. CONVOLUTIONAL NETWORKS and Chellappa 1988 , ) operation reports the maximum output within a rectangular neighborhood. Other popular pooling functions include the average of a rectangular neighborhood, the L2 norm of a rectangular neighborhood, or a weighted average based on the distance from the central pixel. In all cases, pooling helps to make the representation become approximately invariant to small translations of the input. Invariance to translation means that if we translate the input by a small amount, the values of most of the pooled outputs do not change. See figure for an example of how this works. 9.8 Invariance to local translation can be a very useful property if we care more about whether some feature is present than exactly where it is. For example, when determining whether an image contains a face, we need not know the location of the eyes with pixel-perfect accuracy, we just need to know that there is an eye on the left side of the face and an eye on the right side of the face. In other contexts, it is more important to preserve the location of a feature. For example, if we want to find a corner defined by two edges meeting at a specific orientation, we need to preserve the location of the edges well enough to test whether they meet. The use of pooling can be viewed as adding an infinitely strong prior that the function the layer learns must be invariant to small translations. When this assumption is correct, it can greatly improve the statistical efficiency of the network. Pooling over spatial regions produces invariance to translation, but if we pool over the outputs of separately parametrized convolutions, the features can learn which transformations to become invariant to (see figure ). 9.9 Because pooling summarizes the responses over a whole neighborhood, it is possible to use fewer pooling units than detector units, by reporting summary statistics for pooling regions spaced k pixels apart rather than 1 pixel apart. See figure for an example. This improves the computational efficiency of the 9.10 network because the next layer has roughly k times fewer inputs to process. When the number of parameters in the next layer is a function of its input size (such as when the next layer is fully connected and based on matrix multiplication) this reduction in the input size can also result in improved statistical efficiency and reduced memory requirements for storing the parameters. For many tasks, pooling is essential for handling inputs of varying size. For example, if we want to classify images of variable size, the input to the classification layer must have a fixed size. This is usually accomplished by varying the size of an offset between pooling regions so that the classification layer always receives the same number of summary statistics regardless of the input size. For example, the final pooling layer of the network may be defined to output four sets of summary statistics, one for each quadrant of an image, regardless of the image size. 342
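As a rough illustration of pooling with downsampling (in the style of figure 9.10), here is a minimal NumPy sketch of 1-D max pooling over a row of detector-unit outputs. The function name, the handling of the smaller rightmost region, and the default pool width and stride are illustrative choices rather than fixed conventions.

```python
import numpy as np

def max_pool_1d(detector, width=3, stride=2):
    """Max pooling over a 1-D array of detector outputs. The rightmost
    region may be smaller than `width` but is still included."""
    pooled = []
    for start in range(0, len(detector), stride):
        region = detector[start:start + width]
        if len(region) == 0:
            break
        pooled.append(np.max(region))
    return np.array(pooled)

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.1])
print(max_pool_1d(detector))  # [1.0, 0.2, 0.1]
```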
Figure 9.8: Max pooling introduces invariance. (Top) A view of the middle of the output of a convolutional layer. The bottom row shows outputs of the nonlinearity. The top row shows the outputs of max pooling, with a stride of one pixel between pooling regions and a pooling region width of three pixels. (Bottom) A view of the same network, after the input has been shifted to the right by one pixel. Every value in the bottom row has changed, but only half of the values in the top row have changed, because the max pooling units are only sensitive to the maximum value in the neighborhood, not its exact location.
Figure 9.9: Example of learned invariances: A pooling unit that pools over multiple features that are learned with separate parameters can learn to be invariant to transformations of the input. Here we show how a set of three learned filters and a max pooling unit can learn to become invariant to rotation. All three filters are intended to detect a hand-written 5. Each filter attempts to match a slightly different orientation of the 5. When a 5 appears in the input, the corresponding filter will match it and cause a large activation in a detector unit. The max pooling unit then has a large activation regardless of which detector unit was activated. We show here how the network processes two different inputs, resulting in two different detector units being activated. The effect on the pooling unit is roughly the same either way. This principle is leveraged by maxout networks (Goodfellow et al., 2013a) and other convolutional networks. Max pooling over spatial positions is naturally invariant to translation; this multi-channel approach is only necessary for learning other transformations.

Figure 9.10: Pooling with downsampling. Here we use max-pooling with a pool width of three and a stride between pools of two. This reduces the representation size by a factor of two, which reduces the computational and statistical burden on the next layer. Note that the rightmost pooling region has a smaller size, but must be included if we do not want to ignore some of the detector units.
Some theoretical work gives guidance as to which kinds of pooling one should use in various situations (Boureau et al., 2010). It is also possible to dynamically pool features together, for example, by running a clustering algorithm on the locations of interesting features (Boureau et al., 2011). This approach yields a different set of pooling regions for each image. Another approach is to learn a single pooling structure that is then applied to all images (Jia et al., 2012).

Pooling can complicate some kinds of neural network architectures that use top-down information, such as Boltzmann machines and autoencoders. These issues are discussed further when we present these types of networks in part III. Pooling in convolutional Boltzmann machines is presented in section 20.6. The inverse-like operations on pooling units needed in some differentiable networks are covered in section 20.10.6.

Some examples of complete convolutional network architectures for classification using convolution and pooling are shown in figure 9.11.

9.4 Convolution and Pooling as an Infinitely Strong Prior

Recall the concept of a prior probability distribution from section 5.2. This is a probability distribution over the parameters of a model that encodes our beliefs about what models are reasonable, before we have seen any data.

Priors can be considered weak or strong depending on how concentrated the probability density in the prior is. A weak prior is a prior distribution with high entropy, such as a Gaussian distribution with high variance. Such a prior allows the data to move the parameters more or less freely. A strong prior has very low entropy, such as a Gaussian distribution with low variance. Such a prior plays a more active role in determining where the parameters end up.

An infinitely strong prior places zero probability on some parameters and says that these parameter values are completely forbidden, regardless of how much support the data gives to those values.

We can imagine a convolutional net as being similar to a fully connected net, but with an infinitely strong prior over its weights. This infinitely strong prior says that the weights for one hidden unit must be identical to the weights of its neighbor, but shifted in space. The prior also says that the weights must be zero, except for in the small, spatially contiguous receptive field assigned to that hidden unit. Overall, we can think of the use of convolution as introducing an infinitely strong prior probability distribution over the parameters of a layer.

[Figure 9.11 depicts three example pipelines. Left: a 256×256×3 input passes through convolution + ReLU to 256×256×64, pooling with stride 4 to 64×64×64, convolution + ReLU to 64×64×64, pooling with stride 4 to 16×16×64, a reshape to a vector of 16,384 units, a matrix multiply to 1,000 units, and a softmax over 1,000 class probabilities. Center: the same convolutional stack, but the final pooling maps to a fixed 3×3 grid (3×3×64), giving 576 units before the matrix multiply and softmax. Right: no fully connected layer; a final convolution produces 16×16×1,000 feature maps, which are average-pooled to 1×1×1,000 and fed to the softmax.]

Figure 9.11: Examples of architectures for classification with convolutional networks. The specific strides and depths used in this figure are not advisable for real use; they are designed to be very shallow in order to fit onto the page. Real convolutional networks also often involve significant amounts of branching, unlike the chain structures used here for simplicity. (Left) A convolutional network that processes a fixed image size. After alternating between convolution and pooling for a few layers, the tensor for the convolutional feature map is reshaped to flatten out the spatial dimensions. The rest of the network is an ordinary feedforward network classifier, as described in chapter 6. (Center) A convolutional network that processes a variable-sized image, but still maintains a fully connected section. This network uses a pooling operation with variably sized pools but a fixed number of pools, in order to provide a fixed-size vector of 576 units to the fully connected portion of the network. (Right) A convolutional network that does not have any fully connected weight layer. Instead, the last convolutional layer outputs one feature map per class. The model presumably learns a map of how likely each class is to occur at each spatial location. Averaging a feature map down to a single value provides the argument to the softmax classifier at the top.

This prior says that the function the layer should learn contains only local interactions and is equivariant to translation. Likewise, the use of pooling is an infinitely strong prior that each unit should be invariant to small translations.

Of course, implementing a convolutional net as a fully connected net with an infinitely strong prior would be extremely computationally wasteful. But thinking of a convolutional net as a fully connected net with an infinitely strong prior can give us some insights into how convolutional nets work.

One key insight is that convolution and pooling can cause underfitting. Like any prior, convolution and pooling are only useful when the assumptions made by the prior are reasonably accurate. If a task relies on preserving precise spatial information, then using pooling on all features can increase the training error. Some convolutional network architectures (Szegedy et al., 2014a) are designed to use pooling on some channels but not on others, in order to get both highly invariant features and features that will not underfit when the translation invariance prior is incorrect. When a task involves incorporating information from very distant locations in the input, the prior imposed by convolution may be inappropriate.

Another key insight from this view is that we should only compare convolutional models to other convolutional models in benchmarks of statistical learning performance. Models that do not use convolution would be able to learn even if we permuted all of the pixels in the image. For many image datasets, there are separate benchmarks for models that are permutation invariant and must discover the concept of topology via learning, and for models that have the knowledge of spatial relationships hard-coded into them by their designer.

9.5 Variants of the Basic Convolution Function

When discussing convolution in the context of neural networks, we usually do not refer exactly to the standard discrete convolution operation as it is usually understood in the mathematical literature. The functions used in practice differ slightly. Here we describe these differences in detail and highlight some useful properties of the functions used in neural networks.

First, when we refer to convolution in the context of neural networks, we usually actually mean an operation that consists of many applications of convolution in parallel. This is because convolution with a single kernel can extract only one kind of feature, albeit at many spatial locations. Usually we want each layer of our network to extract many kinds of features, at many locations.

Additionally, the input is usually not just a grid of real values. Rather, it is a grid of vector-valued observations. For example, a color image has a red, green and blue intensity at each pixel. In a multilayer convolutional network, the input to the second layer is the output of the first layer, which usually has the output of many different convolutions at each position. When working with images, we usually think of the input and output of the convolution as being 3-D tensors, with one index into the different channels and two indices into the spatial coordinates of each channel. Software implementations usually work in batch mode, so they will actually use 4-D tensors, with the fourth axis indexing different examples in the batch, but we will omit the batch axis in our description here for simplicity.

Because convolutional networks usually use multi-channel convolution, the linear operations they are based on are not guaranteed to be commutative, even if kernel flipping is used. These multi-channel operations are only commutative if each operation has the same number of output channels as input channels.

Assume we have a 4-D kernel tensor K with element K_{i,j,k,l} giving the connection strength between a unit in channel i of the output and a unit in channel j of the input, with an offset of k rows and l columns between the output unit and the input unit. Assume our input consists of observed data V with element V_{i,j,k} giving the value of the input unit within channel i at row j and column k. Assume our output consists of Z with the same format as V. If Z is produced by convolving K across V without flipping K, then

Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1} \, K_{i,l,m,n},    (9.7)

where the summation over l, m and n is over all values for which the tensor indexing operations inside the summation are valid. In linear algebra notation, we index into arrays using a 1 for the first entry. This necessitates the −1 in the above formula. Programming languages such as C and Python index starting from 0, rendering the above expression even simpler.

We may want to skip over some positions of the kernel in order to reduce the computational cost (at the expense of not extracting our features as finely). We can think of this as downsampling the output of the full convolution function. If we want to sample only every s pixels in each direction in the output, then we can define a downsampled convolution function c such that

Z_{i,j,k} = c(K, V, s)_{i,j,k} = \sum_{l,m,n} V_{l,\, (j-1)\times s + m,\, (k-1)\times s + n} \, K_{i,l,m,n}.    (9.8)

We refer to s as the stride of this downsampled convolution. It is also possible to define a separate stride for each direction of motion. See figure 9.12 for an illustration.
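The indexing in equations 9.7 and 9.8 is easier to see in code, where 0-based indexing removes the −1 terms. Below is a minimal NumPy sketch of the strided, multi-channel convolution c(K, V, s); the function name and the shapes in the example are assumptions for illustration, and the loops favor clarity over speed.

```python
import numpy as np

def conv_strided(K, V, s=1):
    """Strided multi-channel convolution c(K, V, s) of equations 9.7-9.8,
    written with 0-based indexing (so the -1 terms disappear).

    K: kernel, shape (out_channels, in_channels, kh, kw)
    V: input,  shape (in_channels, H, W)
    Returns Z of shape (out_channels, out_h, out_w), 'valid' positions only."""
    out_ch, in_ch, kh, kw = K.shape
    _, H, W = V.shape
    out_h, out_w = (H - kh) // s + 1, (W - kw) // s + 1
    Z = np.zeros((out_ch, out_h, out_w))
    for i in range(out_ch):
        for j in range(out_h):
            for k in range(out_w):
                patch = V[:, j*s:j*s + kh, k*s:k*s + kw]
                Z[i, j, k] = np.sum(patch * K[i])   # sum over l, m, n
    return Z

V = np.random.randn(3, 32, 32)        # e.g. an RGB image
K = np.random.randn(16, 3, 5, 5)      # 16 output channels
print(conv_strided(K, V, s=1).shape)  # (16, 28, 28)
print(conv_strided(K, V, s=2).shape)  # (16, 14, 14)
```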
One essential feature of any convolutional network implementation is the ability to implicitly zero pad the input V in order to make it wider. Without this feature, the width of the representation shrinks by one pixel less than the kernel width at each layer. Zero padding the input allows us to control the kernel width and the size of the output independently. Without zero padding, we are forced to choose between shrinking the spatial extent of the network rapidly and using small kernels, two scenarios that both significantly limit the expressive power of the network. See figure 9.13 for an example.

Three special cases of the zero-padding setting are worth mentioning. One is the extreme case in which no zero padding is used whatsoever, and the convolution kernel is only allowed to visit positions where the entire kernel is contained entirely within the image. In MATLAB terminology, this is called valid convolution. In this case, all pixels in the output are a function of the same number of pixels in the input, so the behavior of an output pixel is somewhat more regular. However, the size of the output shrinks at each layer. If the input image has width m and the kernel has width k, the output will be of width m − k + 1. The rate of this shrinkage can be dramatic if the kernels used are large. Since the shrinkage is greater than 0, it limits the number of convolutional layers that can be included in the network. As layers are added, the spatial dimension of the network will eventually drop to 1 × 1, at which point additional layers cannot meaningfully be considered convolutional.

Another special case of the zero-padding setting is when just enough zero padding is added to keep the size of the output equal to the size of the input. MATLAB calls this same convolution. In this case, the network can contain as many convolutional layers as the available hardware can support, since the operation of convolution does not modify the architectural possibilities available to the next layer. However, the input pixels near the border influence fewer output pixels than the input pixels near the center. This can make the border pixels somewhat underrepresented in the model.

This motivates the other extreme case, which MATLAB refers to as full convolution, in which enough zeroes are added for every pixel to be visited k times in each direction, resulting in an output image of width m + k − 1. In this case, the output pixels near the border are a function of fewer pixels than the output pixels near the center. This can make it difficult to learn a single kernel that performs well at all positions in the convolutional feature map. Usually the optimal amount of zero padding (in terms of test set classification accuracy) lies somewhere between "valid" and "same" convolution.
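The three padding regimes differ only in their output width: m − k + 1 for valid, m for same, and m + k − 1 for full. NumPy's one-dimensional np.convolve supports all three modes, so a short sketch can confirm the arithmetic; the specific widths m = 16 and k = 6 below are arbitrary choices that match the kernel width used in figure 9.13.

```python
import numpy as np

m, k = 16, 6
signal = np.random.randn(m)   # a 1-D "image" of width m
kernel = np.random.randn(k)   # a kernel of width k

for mode in ("valid", "same", "full"):
    out = np.convolve(signal, kernel, mode=mode)
    print(f"{mode:>5}: width {out.size}")
# valid: width 11   (m - k + 1)
#  same: width 16   (m)
#  full: width 21   (m + k - 1)
```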
Figure 9.12: Convolution with a stride. In this example, we use a stride of two. (Top) Convolution with a stride length of two implemented in a single operation. (Bottom) Convolution with a stride greater than one pixel is mathematically equivalent to convolution with unit stride followed by downsampling. Obviously, the two-step approach involving downsampling is computationally wasteful, because it computes many values that are then discarded.

Figure 9.13: The effect of zero padding on network size. Consider a convolutional network with a kernel of width six at every layer. In this example, we do not use any pooling, so only the convolution operation itself shrinks the network size. (Top) In this convolutional network, we do not use any implicit zero padding. This causes the representation to shrink by five pixels at each layer. Starting from an input of sixteen pixels, we are only able to have three convolutional layers, and the last layer does not ever move the kernel, so arguably only two of the layers are truly convolutional. The rate of shrinking can be mitigated by using smaller kernels, but smaller kernels are less expressive, and some shrinking is inevitable in this kind of architecture. (Bottom) By adding five implicit zeroes to each layer, we prevent the representation from shrinking with depth. This allows us to make an arbitrarily deep convolutional network.

In some cases, we do not actually want to use convolution, but rather locally connected layers (LeCun, 1986, 1989). In this case, the adjacency matrix in the graph of our MLP is the same, but every connection has its own weight, specified by a 6-D tensor W. The indices into W are, respectively: i, the output channel; j, the output row; k, the output column; l, the input channel; m, the row offset within the input; and n, the column offset within the input. The linear part of a locally connected layer is then given by

Z_{i,j,k} = \sum_{l,m,n} \left[ V_{l,\, j+m-1,\, k+n-1} \, W_{i,j,k,l,m,n} \right].    (9.9)

This is sometimes also called unshared convolution, because it is a similar operation to discrete convolution with a small kernel, but without sharing parameters across locations. Figure 9.14 compares local connections, convolution, and full connections.

Locally connected layers are useful when we know that each feature should be a function of a small part of space, but there is no reason to think that the same feature should occur across all of space. For example, if we want to tell if an image is a picture of a face, we only need to look for the mouth in the bottom half of the image.

It can also be useful to make versions of convolution or locally connected layers in which the connectivity is further restricted, for example to constrain each output channel i to be a function of only a subset of the input channels l. A common way to do this is to make the first m output channels connect to only the first n input channels, the second m output channels connect to only the second n input channels, and so on. See figure 9.15 for an example. Modeling interactions between few channels allows the network to have fewer parameters, reducing memory consumption and increasing statistical efficiency, and also reduces the amount of computation needed to perform forward and back-propagation. It accomplishes these goals without reducing the number of hidden units.

Tiled convolution (Gregor and LeCun, 2010a; Le et al., 2010) offers a compromise between a convolutional layer and a locally connected layer. Rather than learning a separate set of weights at every spatial location, we learn a set of kernels that we rotate through as we move through space. This means that immediately neighboring locations will have different filters, as in a locally connected layer, but the memory requirements for storing the parameters will increase only by a factor of the size of this set of kernels, rather than by the size of the entire output feature map. See figure 9.16 for a comparison of locally connected layers, tiled convolution, and standard convolution.
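Equation 9.9 differs from convolution only in that the weight tensor carries the output position (j, k) as extra indices, so nothing is shared across locations. A minimal, loop-based NumPy sketch (0-indexed, 'valid' positions only; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def locally_connected(V, W):
    """Linear part of a locally connected layer (equation 9.9), 0-indexed.

    V: input,   shape (in_channels, in_rows, in_cols)
    W: weights, shape (out_channels, out_rows, out_cols,
                       in_channels, kernel_rows, kernel_cols)
    Unlike convolution, a separate kernel W[i, j, k] is used at every
    output position (j, k), so no parameters are shared across locations."""
    out_ch, out_r, out_c, in_ch, k_r, k_c = W.shape
    Z = np.zeros((out_ch, out_r, out_c))
    for i in range(out_ch):
        for j in range(out_r):
            for k in range(out_c):
                patch = V[:, j:j + k_r, k:k + k_c]
                Z[i, j, k] = np.sum(patch * W[i, j, k])
    return Z

V = np.random.randn(3, 8, 8)            # e.g. an RGB patch
W = np.random.randn(4, 6, 6, 3, 3, 3)   # valid output width: 8 - 3 + 1 = 6
print(locally_connected(V, W).shape)    # (4, 6, 6)
```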
Figure 9.14: Comparison of local connections, convolution, and full connections. (Top) A locally connected layer with a patch size of two pixels. Each edge is labeled with a unique letter to show that each edge is associated with its own weight parameter. (Center) A convolutional layer with a kernel width of two pixels. This model has exactly the same connectivity as the locally connected layer. The difference lies not in which units interact with each other, but in how the parameters are shared. The locally connected layer has no parameter sharing. The convolutional layer uses the same two weights repeatedly across the entire input, as indicated by the repetition of the letters labeling each edge. (Bottom) A fully connected layer resembles a locally connected layer in the sense that each edge has its own parameter (there are too many to label explicitly with letters in this diagram). However, it does not have the restricted connectivity of the locally connected layer.

Figure 9.15: A convolutional network with the first two output channels connected to only the first two input channels, and the second two output channels connected to only the second two input channels.

Figure 9.16: A comparison of locally connected layers, tiled convolution, and standard convolution. All three have the same sets of connections between units, when the same size of kernel is used. This diagram illustrates the use of a kernel that is two pixels wide. The differences between the methods lie in how they share parameters. (Top) A locally connected layer has no sharing at all. We indicate that each connection has its own weight by labeling each connection with a unique letter. (Center) Tiled convolution has a set of t different kernels. Here we illustrate the case of t = 2. One of these kernels has edges labeled "a" and "b," while the other has edges labeled "c" and "d." Each time we move one pixel to the right in the output, we move on to using a different kernel. This means that, like the locally connected layer, neighboring units in the output have different parameters. Unlike the locally connected layer, after we have gone through all t available kernels, we cycle back to the first kernel. If two output units are separated by a multiple of t steps, then they share parameters. (Bottom) Traditional convolution is equivalent to tiled convolution with t = 1. There is only one kernel, and it is applied everywhere, as indicated in the diagram by using the kernel with weights labeled "a" and "b" everywhere.

To define tiled convolution algebraically, let K be a 6-D tensor, where two of the dimensions correspond to different locations in the output map. Rather than having a separate index for each location in the output map, output locations cycle through a set of t different choices of kernel stack in each direction. If t is equal to the output width, this is the same as a locally connected layer:

Z_{i,j,k} = \sum_{l,m,n} V_{l,\, j+m-1,\, k+n-1} \, K_{i,l,m,n,\, j\%t+1,\, k\%t+1},    (9.10)

where % is the modulo operation, with t % t = 0, (t + 1) % t = 1, and so on. It is straightforward to generalize this equation to use a different tiling range for each dimension.

Both locally connected layers and tiled convolutional layers have an interesting interaction with max pooling: the detector units of these layers are driven by different filters. If these filters learn to detect different transformed versions of the same underlying features, then the max-pooled units become invariant to the learned transformation (see figure 9.9). Convolutional layers are hard-coded to be invariant specifically to translation.

Other operations besides convolution are usually necessary to implement a convolutional network. To perform learning, one must be able to compute the gradient with respect to the kernel, given the gradient with respect to the outputs. In some simple cases, this operation can be performed using the convolution operation, but many cases of interest, including the case of stride greater than 1, do not have this property.

Recall that convolution is a linear operation and can thus be described as a matrix multiplication (if we first reshape the input tensor into a flat vector). The matrix involved is a function of the convolution kernel. The matrix is sparse, and each element of the kernel is copied to several elements of the matrix. This view helps us to derive some of the other operations needed to implement a convolutional network.

Multiplication by the transpose of the matrix defined by convolution is one such operation. This is the operation needed to back-propagate error derivatives through a convolutional layer, so it is needed to train convolutional networks that have more than one hidden layer. This same operation is also needed if we wish to reconstruct the visible units from the hidden units (Simard et al., 1992). Reconstructing the visible units is an operation commonly used in the models described in part III of this book, such as autoencoders, RBMs, and sparse coding. Transpose convolution is necessary to construct convolutional versions of those models. Like the kernel gradient operation, this input gradient operation can be implemented using a convolution in some cases, but in the general case requires a third operation to be implemented. Care must be taken to coordinate this transpose operation with the forward propagation. The size of the output that the transpose operation should return depends on the zero padding policy and stride of the forward propagation operation, as well as on the size of the forward propagation's output map. In some cases, multiple sizes of input to forward propagation can result in the same size of output map, so the transpose operation must be explicitly told what the size of the original input was.

These three operations (convolution, backprop from output to weights, and backprop from output to inputs) are sufficient to compute all of the gradients needed to train any depth of feedforward convolutional network, as well as to train convolutional networks with reconstruction functions based on the transpose of convolution. See Goodfellow (2010) for a full derivation of the equations in the fully general multi-dimensional, multi-example case. To give a sense of how these equations work, we present the two-dimensional, single-example version here.

Suppose we want to train a convolutional network that incorporates strided convolution of kernel stack K applied to multi-channel image V with stride s, as defined by c(K, V, s) in equation 9.8. Suppose we want to minimize some loss function J(V, K). During forward propagation, we will need to use c itself to output Z, which is then propagated through the rest of the network and used to compute the cost function J. During back-propagation, we will receive a tensor G such that

G_{i,j,k} = \frac{\partial}{\partial Z_{i,j,k}} J(V, K).

To train the network, we need to compute the derivatives with respect to the weights in the kernel. To do so, we can use a function

g(G, V, s)_{i,j,k,l} = \frac{\partial}{\partial K_{i,j,k,l}} J(V, K) = \sum_{m,n} G_{i,m,n} \, V_{j,\, (m-1)\times s + k,\, (n-1)\times s + l}.    (9.11)

If this layer is not the bottom layer of the network, we will need to compute the gradient with respect to V in order to back-propagate the error farther down. To do so, we can use a function

h(K, G, s)_{i,j,k} = \frac{\partial}{\partial V_{i,j,k}} J(V, K)    (9.12)
= \sum_{l,m \,\text{s.t.}\, (l-1)\times s + m = j} \;\; \sum_{n,p \,\text{s.t.}\, (n-1)\times s + p = k} \;\; \sum_{q} K_{q,i,m,p} \, G_{q,l,n}.    (9.13)
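Written with 0-based indexing, the kernel gradient g of equation 9.11 and the input gradient h of equation 9.13 become short accumulation loops. This is a minimal sketch under the same shape conventions as the conv_strided example given after equation 9.8; the function names are assumptions, and a real implementation would vectorize these loops.

```python
import numpy as np

def grad_kernel(G, V, s, kernel_shape):
    """g(G, V, s) of equation 9.11 (0-indexed): gradient of the loss with
    respect to the kernel, given G = dJ/dZ and the input V."""
    out_ch, in_ch, kh, kw = kernel_shape
    dK = np.zeros(kernel_shape)
    _, out_h, out_w = G.shape
    for i in range(out_ch):
        for m in range(out_h):
            for n in range(out_w):
                dK[i] += G[i, m, n] * V[:, m*s:m*s + kh, n*s:n*s + kw]
    return dK

def grad_input(K, G, s, input_shape):
    """h(K, G, s) of equations 9.12-9.13 (0-indexed): gradient of the loss
    with respect to the input, i.e. the transpose of the forward convolution."""
    out_ch, in_ch, kh, kw = K.shape
    dV = np.zeros(input_shape)
    _, out_h, out_w = G.shape
    for q in range(out_ch):
        for l in range(out_h):
            for n in range(out_w):
                dV[:, l*s:l*s + kh, n*s:n*s + kw] += G[q, l, n] * K[q]
    return dV

# Tiny demo with assumed shapes, pretending dJ/dZ is all ones.
s = 2
V = np.random.randn(2, 7, 7)
K = np.random.randn(3, 2, 3, 3)
out_h = out_w = (7 - 3) // s + 1
G = np.ones((3, out_h, out_w))
print(grad_kernel(G, V, s, K.shape).shape)  # (3, 2, 3, 3)
print(grad_input(K, G, s, V.shape).shape)   # (2, 7, 7)
```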
Autoencoder networks, described in chapter 14, are feedforward networks trained to copy their input to their output. A simple example is the PCA algorithm, which copies its input x to an approximate reconstruction r using the function W^\top W x. It is common for more general autoencoders to use multiplication by the transpose of the weight matrix just as PCA does. To make such models convolutional, we can use the function h to perform the transpose of the convolution operation. Suppose we have hidden units H in the same format as Z and we define a reconstruction

R = h(K, H, s).    (9.14)

In order to train the autoencoder, we will receive the gradient with respect to R as a tensor E. To train the decoder, we need to obtain the gradient with respect to K. This is given by g(H, E, s). To train the encoder, we need to obtain the gradient with respect to H. This is given by c(K, E, s). It is also possible to differentiate through g using c and h, but these operations are not needed for the back-propagation algorithm on any standard network architectures.

Generally, we do not use only a linear operation to transform from the inputs to the outputs in a convolutional layer. We generally also add some bias term to each output before applying the nonlinearity. This raises the question of how to share parameters among the biases. For locally connected layers, it is natural to give each unit its own bias, and for tiled convolution, it is natural to share the biases with the same tiling pattern as the kernels. For convolutional layers, it is typical to have one bias per channel of the output and share it across all locations within each convolution map. However, if the input is of known, fixed size, it is also possible to learn a separate bias at each location of the output map. Separating the biases may slightly reduce the statistical efficiency of the model, but it also allows the model to correct for differences in the image statistics at different locations. For example, when using implicit zero padding, detector units at the edge of the image receive less total input and may need larger biases.

9.6 Structured Outputs

Convolutional networks can be used to output a high-dimensional, structured object, rather than just predicting a class label for a classification task or a real value for a regression task. Typically this object is just a tensor, emitted by a standard convolutional layer. For example, the model might emit a tensor S, where S_{i,j,k} is the probability that pixel (j, k) of the input to the network belongs to class i. This allows the model to label every pixel in an image and draw precise masks that follow the outlines of individual objects.

One issue that often comes up is that the output plane can be smaller than the input plane, as shown in figure 9.13. In the kinds of architectures typically used for classification of a single object in an image, the greatest reduction in the spatial dimensions of the network comes from using pooling layers with large stride. In order to produce an output map of similar size as the input, one can avoid pooling altogether (Jain et al., 2007). Another strategy is to simply emit a lower-resolution grid of labels (Pinheiro and Collobert, 2014, 2015). Finally, in principle, one could use a pooling operator with unit stride.

One strategy for pixel-wise labeling of images is to produce an initial guess of the image labels, then refine this initial guess using the interactions between neighboring pixels. Repeating this refinement step several times corresponds to using the same convolutions at each stage, sharing weights between the last layers of the deep net (Jain et al., 2007). This makes the sequence of computations performed by the successive convolutional layers with weights shared across layers a particular kind of recurrent network (Pinheiro and Collobert, 2014, 2015). Figure 9.17 shows the architecture of such a recurrent convolutional network.

Figure 9.17: An example of a recurrent convolutional network for pixel labeling. The input is an image tensor X, with axes corresponding to image rows, image columns, and channels (red, green, blue). The goal is to output a tensor of labels Ŷ, with a probability distribution over labels for each pixel. This tensor has axes corresponding to image rows, image columns, and the different classes. Rather than outputting Ŷ in a single shot, the recurrent network iteratively refines its estimate Ŷ by using a previous estimate of Ŷ as input for creating a new estimate. The same parameters are used for each updated estimate, and the estimate can be refined as many times as we wish. The tensor of convolution kernels U is used on each step to compute the hidden representation given the input image. The kernel tensor V is used to produce an estimate of the labels given the hidden values. On all but the first step, the kernels W are convolved over Ŷ to provide input to the hidden layer. On the first time step, this term is replaced by zero. Because the same parameters are used on each step, this is an example of a recurrent network, as described in chapter 10.
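The recurrence in figure 9.17 amounts to reusing the same three kernel stacks U, V, and W at every refinement step. The following is a rough NumPy/SciPy sketch of that loop, not an implementation from the book: the 'same'-padded cross-correlation helper, the ReLU nonlinearity, the placement of the softmax over classes, and all of the shapes are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

def multi_channel_conv(X, kernels):
    """'Same'-padded cross-correlation: X is (in_ch, H, W), kernels is
    (out_ch, in_ch, kh, kw); returns (out_ch, H, W)."""
    out = np.zeros((kernels.shape[0],) + X.shape[1:])
    for i, K in enumerate(kernels):
        for j, k2d in enumerate(K):
            out[i] += correlate2d(X[j], k2d, mode="same")
    return out

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def refine_labels(X, U, V, W, steps=3):
    """Iterative refinement as in figure 9.17: the same kernels U, V, W
    are reused at every step, making the computation recurrent."""
    Y = None
    for _ in range(steps):
        H = multi_channel_conv(X, U)           # image -> hidden
        if Y is not None:
            H += multi_channel_conv(Y, W)      # previous labels -> hidden
        H = np.maximum(H, 0)                   # ReLU (an assumed choice)
        Y = softmax(multi_channel_conv(H, V))  # hidden -> per-pixel labels
    return Y

# Toy shapes: 3-channel 16x16 image, 8 hidden channels, 5 classes.
X = np.random.randn(3, 16, 16)
U = 0.1 * np.random.randn(8, 3, 3, 3)
W = 0.1 * np.random.randn(8, 5, 3, 3)
V = 0.1 * np.random.randn(5, 8, 3, 3)
print(refine_labels(X, U, V, W).shape)   # (5, 16, 16)
```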
Once a prediction for each pixel is made, various methods can be used to further process these predictions in order to obtain a segmentation of the image into regions (Briggman et al., 2009; Turaga et al., 2010; Farabet et al., 2013). The general idea is to assume that large groups of contiguous pixels tend to be associated with the same label. Graphical models can describe the probabilistic relationships between neighboring pixels. Alternatively, the convolutional network can be trained to maximize an approximation of the graphical model training objective (Ning et al., 2005; Thompson et al., 2014).

9.7 Data Types

The data used with a convolutional network usually consists of several channels, each channel being the observation of a different quantity at some point in space or time. See table 9.1 for examples of data types with different dimensionalities and numbers of channels. For an example of convolutional networks applied to video, see Chen et al. (2010).

So far we have discussed only the case where every example in the train and test data has the same spatial dimensions. One advantage of convolutional networks is that they can also process inputs with varying spatial extents. These kinds of input simply cannot be represented by traditional, matrix multiplication-based neural networks. This provides a compelling reason to use convolutional networks even when computational cost and overfitting are not significant issues.

For example, consider a collection of images in which each image has a different width and height. It is unclear how to model such inputs with a weight matrix of fixed size. Convolution is straightforward to apply; the kernel is simply applied a different number of times depending on the size of the input, and the output of the convolution operation scales accordingly. Convolution may be viewed as matrix multiplication; the same convolution kernel induces a different size of doubly block circulant matrix for each size of input.

Sometimes the output of the network is allowed to have variable size as well as the input, for example if we want to assign a class label to each pixel of the input. In this case, no further design work is necessary. In other cases, the network must produce some fixed-size output, for example if we want to assign a single class label to the entire image. In this case we must make some additional design steps, like inserting a pooling layer whose pooling regions scale in size proportionally to the size of the input, in order to maintain a fixed number of pooled outputs. Some examples of this kind of strategy are shown in figure 9.11.
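A pooling layer whose regions scale with the input size can be sketched in a few lines of NumPy. This is one assumed way to do it (max pooling onto a fixed 3×3 grid, as in the center architecture of figure 9.11), not a prescribed implementation:

```python
import numpy as np

def pool_to_fixed_grid(feature_map, out_h=3, out_w=3):
    """Max-pool a (channels, H, W) feature map into a fixed (channels,
    out_h, out_w) grid, regardless of H and W. The pooling regions grow
    in proportion to the input size, so variable-sized images yield a
    fixed-size vector for the fully connected layers that follow."""
    c, H, W = feature_map.shape
    rows = np.linspace(0, H, out_h + 1).astype(int)
    cols = np.linspace(0, W, out_w + 1).astype(int)
    out = np.zeros((c, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            region = feature_map[:, rows[i]:rows[i+1], cols[j]:cols[j+1]]
            out[:, i, j] = region.max(axis=(1, 2))
    return out

for H, W in [(64, 64), (48, 80)]:
    print(pool_to_fixed_grid(np.random.randn(64, H, W)).shape)  # (64, 3, 3) both times
```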
Table 9.1: Examples of different formats of data that can be used with convolutional networks.

1-D, single channel. Audio waveform: the axis we convolve over corresponds to time. We discretize time and measure the amplitude of the waveform once per time step.

1-D, multi-channel. Skeleton animation data: animations of 3-D computer-rendered characters are generated by altering the pose of a "skeleton" over time. At each point in time, the pose of the character is described by a specification of the angles of each of the joints in the character's skeleton. Each channel in the data we feed to the convolutional model represents the angle about one axis of one joint.

2-D, single channel. Audio data that has been preprocessed with a Fourier transform: we can transform the audio waveform into a 2-D tensor with different rows corresponding to different frequencies and different columns corresponding to different points in time. Using convolution along the time axis makes the model equivariant to shifts in time. Using convolution across the frequency axis makes the model equivariant to frequency, so that the same melody played in a different octave produces the same representation, but at a different height in the network's output.

2-D, multi-channel. Color image data: one channel contains the red pixels, one the green pixels, and one the blue pixels. The convolution kernel moves over both the horizontal and vertical axes of the image, conferring translation equivariance in both directions.

3-D, single channel. Volumetric data: a common source of this kind of data is medical imaging technology, such as CT scans.

3-D, multi-channel. Color video data: one axis corresponds to time, one to the height of the video frame, and one to the width of the video frame.

Note that the use of convolution for processing variably sized inputs makes sense only for inputs that have variable size because they contain varying amounts of observation of the same kind of thing: different lengths of recordings over time, different widths of observations over space, and so on. Convolution does not make sense if the input has variable size because it can optionally include different kinds of observations. For example, if we are processing college applications, and our features consist of both grades and standardized test scores, but not every applicant took the standardized test, then it does not make sense to convolve the same weights over both the features corresponding to the grades and the features corresponding to the test scores.

9.8 Efficient Convolution Algorithms

Modern convolutional network applications often involve networks containing more than one million units. Powerful implementations exploiting parallel computation resources, as discussed in section 12.1, are essential. However, in many cases it is also possible to speed up convolution by selecting an appropriate convolution algorithm.

Convolution is equivalent to converting both the input and the kernel to the frequency domain using a Fourier transform, performing point-wise multiplication of the two signals, and converting back to the time domain using an inverse Fourier transform. For some problem sizes, this can be faster than the naive implementation of discrete convolution.

When a d-dimensional kernel can be expressed as the outer product of d vectors, one vector per dimension, the kernel is called separable. When the kernel is separable, naive convolution is inefficient. The same result can be obtained by composing d one-dimensional convolutions with each of these vectors. The composed approach is significantly faster than performing one d-dimensional convolution with their outer product. The kernel also takes fewer parameters to represent as vectors. If the kernel is w elements wide in each dimension, then naive multidimensional convolution requires O(w^d) runtime and parameter storage space, while separable convolution requires O(w × d) runtime and parameter storage space. Of course, not every convolution can be represented in this way.

Devising faster ways of performing convolution or approximate convolution without harming the accuracy of the model is an active area of research. Even techniques that improve the efficiency of only forward propagation are useful, because in the commercial setting it is typical to devote more resources to deployment of a network than to its training.
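The separability identity is easy to verify numerically: a 2-D convolution with an outer-product kernel gives the same result as two 1-D convolutions, one per dimension. A small sketch using NumPy and SciPy (the 5×5 kernel and 32×32 image sizes are arbitrary choices):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
u = rng.standard_normal(5)          # vertical (column) factor
v = rng.standard_normal(5)          # horizontal (row) factor
kernel = np.outer(u, v)             # separable 5x5 kernel

# Naive: one 2-D convolution with the full w x w kernel, O(w^2) work per output.
naive = convolve2d(image, kernel, mode="valid")

# Separable: two 1-D convolutions, O(2w) work per output.
rows = np.apply_along_axis(np.convolve, 1, image, v, mode="valid")
separable = np.apply_along_axis(np.convolve, 0, rows, u, mode="valid")

print(np.allclose(naive, separable))   # True
```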
9.9 Random or Unsupervised Features

Typically, the most expensive part of convolutional network training is learning the features. The output layer is usually relatively inexpensive, due to the small number of features provided as input to this layer after passing through several layers of pooling. When performing supervised training with gradient descent, every gradient step requires a complete run of forward propagation and backward propagation through the entire network. One way to reduce the cost of convolutional network training is to use features that are not trained in a supervised fashion.

There are three basic strategies for obtaining convolution kernels without supervised training. One is to simply initialize them randomly. Another is to design them by hand, for example by setting each kernel to detect edges at a certain orientation or scale. Finally, one can learn the kernels with an unsupervised criterion. For example, Coates et al. (2011) apply k-means clustering to small image patches, then use each learned centroid as a convolution kernel. Part III describes many more unsupervised learning approaches. Learning the features with an unsupervised criterion allows them to be determined separately from the classifier layer at the top of the architecture. One can then extract the features for the entire training set just once, essentially constructing a new training set for the last layer. Learning the last layer is then typically a convex optimization problem, assuming the last layer is something like logistic regression or an SVM.

Random filters often work surprisingly well in convolutional networks (Jarrett et al., 2009; Saxe et al., 2011; Pinto et al., 2011; Cox and Pinto, 2011). Saxe et al. (2011) showed that layers consisting of convolution followed by pooling naturally become frequency selective and translation invariant when assigned random weights. They argue that this provides an inexpensive way to choose the architecture of a convolutional network: first evaluate the performance of several convolutional network architectures by training only the last layer, then take the best of these architectures and train the entire architecture using a more expensive approach.

An intermediate approach is to learn the features, but using methods that do not require full forward and back-propagation at every gradient step. As with multilayer perceptrons, we use greedy layer-wise pretraining, to train the first layer in isolation, then extract all features from the first layer only once, then train the second layer in isolation given those features, and so on. Chapter 8 has described how to perform supervised greedy layer-wise pretraining, and part III extends this to greedy layer-wise pretraining using an unsupervised criterion at each layer. The canonical example of greedy layer-wise pretraining of a convolutional model is the convolutional deep belief network (Lee et al., 2009).
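A rough sketch of the patch-clustering idea, in the spirit of the k-means approach of Coates et al. (2011): sample small patches from the training images, normalize them, cluster them with k-means, and treat the centroids as kernels. The helper name, the use of scikit-learn's KMeans, the grayscale inputs, and the normalization step are all assumptions made for illustration rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_kernels(images, num_kernels=64, patch_size=6,
                   patches_per_image=10, seed=0):
    """Learn convolution kernels without supervision by clustering patches.

    images: array of shape (n_images, H, W), grayscale for simplicity.
    Returns an array of shape (num_kernels, patch_size, patch_size) whose
    entries can be used directly as the kernels of a convolutional layer."""
    rng = np.random.default_rng(seed)
    n, H, W = images.shape
    patches = []
    for img in images:
        for _ in range(patches_per_image):
            r = rng.integers(0, H - patch_size + 1)
            c = rng.integers(0, W - patch_size + 1)
            patch = img[r:r + patch_size, c:c + patch_size].ravel()
            patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # contrast normalize
            patches.append(patch)
    km = KMeans(n_clusters=num_kernels, n_init=10, random_state=seed)
    km.fit(np.stack(patches))
    return km.cluster_centers_.reshape(num_kernels, patch_size, patch_size)

kernels = kmeans_kernels(np.random.rand(200, 32, 32))
print(kernels.shape)   # (64, 6, 6)
```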
Convolutional networks offer us the opportunity to take the pretraining strategy one step further than is possible with multilayer perceptrons. Instead of training an entire convolutional layer at a time, we can train a model of a small patch, as Coates et al. (2011) do with k-means. We can then use the parameters from this patch-based model to define the kernels of a convolutional layer. This means that it is possible to use unsupervised learning to train a convolutional network without ever using convolution during the training process. Using this approach, we can train very large models and incur a high computational cost only at inference time (Ranzato et al., 2007b; Jarrett et al., 2009; Kavukcuoglu et al., 2010; Coates et al., 2013). This approach was popular from roughly 2007 to 2013, when labeled datasets were small and computational power was more limited. Today, most convolutional networks are trained in a purely supervised fashion, using full forward and back-propagation through the entire network on each training iteration.

As with other approaches to unsupervised pretraining, it remains difficult to tease apart the cause of some of the benefits seen with this approach. Unsupervised pretraining may offer some regularization relative to supervised training, or it may simply allow us to train much larger architectures because of the reduced computational cost of the learning rule.

9.10 The Neuroscientific Basis for Convolutional Networks

Convolutional networks are perhaps the greatest success story of biologically inspired artificial intelligence. Though convolutional networks have been guided by many other fields, some of the key design principles of neural networks were drawn from neuroscience.

The history of convolutional networks begins with neuroscientific experiments long before the relevant computational models were developed. Neurophysiologists David Hubel and Torsten Wiesel collaborated for several years to determine many of the most basic facts about how the mammalian vision system works (Hubel and Wiesel, 1959, 1962, 1968). Their accomplishments were eventually recognized with a Nobel prize. Their findings that have had the greatest influence on contemporary deep learning models were based on recording the activity of individual neurons in cats. They observed how neurons in the cat's brain responded to images projected in precise locations on a screen in front of the cat. Their great discovery was that neurons in the early visual system responded most strongly to very specific patterns of light, such as precisely oriented bars, but responded hardly at all to other patterns.

Their work helped to characterize many aspects of brain function that are beyond the scope of this book. From the point of view of deep learning, we can focus on a simplified, cartoon view of brain function.

In this simplified view, we focus on a part of the brain called V1, also known as the primary visual cortex. V1 is the first area of the brain that begins to perform significantly advanced processing of visual input. In this cartoon view, images are formed by light arriving in the eye and stimulating the retina, the light-sensitive tissue in the back of the eye. The neurons in the retina perform some simple preprocessing of the image but do not substantially alter the way it is represented. The image then passes through the optic nerve and a brain region called the lateral geniculate nucleus. The main role, as far as we are concerned here, of both of these anatomical regions is primarily just to carry the signal from the eye to V1, which is located at the back of the head.

A convolutional network layer is designed to capture three properties of V1:

1. V1 is arranged in a spatial map. It actually has a two-dimensional structure mirroring the structure of the image in the retina. For example, light arriving at the lower half of the retina affects only the corresponding half of V1. Convolutional networks capture this property by having their features defined in terms of two-dimensional maps.

2. V1 contains many simple cells. A simple cell's activity can to some extent be characterized by a linear function of the image in a small, spatially localized receptive field. The detector units of a convolutional network are designed to emulate these properties of simple cells.

3. V1 also contains many complex cells. These cells respond to features that are similar to those detected by simple cells, but complex cells are invariant to small shifts in the position of the feature. This inspires the pooling units of convolutional networks. Complex cells are also invariant to some changes in lighting that cannot be captured simply by pooling over spatial locations. These invariances have inspired some of the cross-channel pooling strategies in convolutional networks, such as maxout units (Goodfellow et al., 2013a).

Though we know the most about V1, it is generally believed that the same basic principles apply to other areas of the visual system. In our cartoon view of the visual system, the basic strategy of detection followed by pooling is repeatedly applied as we move deeper into the brain. As we pass through multiple anatomical layers of the brain, we eventually find cells that respond to some specific concept and are invariant to many transformations of the input. These cells have been nicknamed "grandmother cells": the idea is that a person could have a neuron that activates when seeing an image of their grandmother, regardless of whether she appears on the left or right side of the image, whether the image is a close-up of her face or a zoomed-out shot of her entire body, whether she is brightly lit or in shadow, and so on.

These grandmother cells have been shown to actually exist in the human brain, in a region called the medial temporal lobe (Quiroga et al., 2005). Researchers tested whether individual neurons would respond to photos of famous individuals. They found what has come to be called the "Halle Berry neuron": an individual neuron that is activated by the concept of Halle Berry. This neuron fires when a person sees a photo of Halle Berry, a drawing of Halle Berry, or even text containing the words "Halle Berry." Of course, this has nothing to do with Halle Berry herself; other neurons responded to the presence of Bill Clinton, Jennifer Aniston, and so on.

These medial temporal lobe neurons are somewhat more general than modern convolutional networks, which would not automatically generalize to identifying a person or object when reading its name. The closest analog to a convolutional network's last layer of features is a brain area called the inferotemporal cortex (IT). When viewing an object, information flows from the retina, through the LGN, to V1, then onward to V2, then V4, then IT. This happens within the first 100 ms of glimpsing an object. If a person is allowed to continue looking at the object for more time, then information will begin to flow backward as the brain uses top-down feedback to update the activations in the lower-level brain areas. However, if we interrupt the person's gaze and observe only the firing rates that result from the first 100 ms of mostly feedforward activation, then IT proves to be very similar to a convolutional network. Convolutional networks can predict IT firing rates, and they also perform very similarly to (time-limited) humans on object recognition tasks (DiCarlo, 2013).

That being said, there are many differences between convolutional networks and the mammalian vision system. Some of these differences are well known to computational neuroscientists but are outside the scope of this book. Some of these differences are not yet known, because many basic questions about how the mammalian vision system works remain unanswered. As a brief list:

• The human eye is mostly very low resolution, except for a tiny patch called the fovea. The fovea observes only an area about the size of a thumbnail held at arm's length. Though we feel as if we can see an entire scene in high resolution, this is an illusion created by the subconscious part of our brain, as it stitches together several glimpses of small areas. Most convolutional networks actually receive large full-resolution photographs as input. The human brain makes several eye movements called saccades to glimpse the most visually salient or task-relevant parts of a scene. Incorporating similar attention mechanisms into deep learning models is an active research direction. In the context of deep learning, attention mechanisms have been most successful for natural language processing, as described in section 12.4.5.1. Several visual models with foveation mechanisms have been developed but so far have not become the dominant approach (Larochelle and Hinton, 2010; Denil et al., 2012).

• The human visual system is integrated with many other senses, such as hearing, and factors like our moods and thoughts. Convolutional networks so far are purely visual.

• The human visual system does much more than just recognize objects. It is able to understand entire scenes, including many objects and relationships between objects, and it processes the rich 3-D geometric information needed for our bodies to interface with the world. Convolutional networks have been applied to some of these problems, but these applications are in their infancy.

• Even simple brain areas like V1 are heavily impacted by feedback from higher levels. Feedback has been explored extensively in neural network models but has not yet been shown to offer a compelling improvement.

• While feedforward IT firing rates capture much of the same information as convolutional network features, it is not clear how similar the intermediate computations are. The brain probably uses very different activation and pooling functions. An individual neuron's activation probably is not well characterized by a single linear filter response. A recent model of V1 involves multiple quadratic filters for each neuron (Rust et al., 2005). Indeed, our cartoon picture of "simple cells" and "complex cells" might create a nonexistent distinction; simple cells and complex cells might both be the same kind of cell, with their "parameters" enabling a continuum of behaviors ranging from what we call "simple" to what we call "complex."

It is also worth mentioning that neuroscience has told us relatively little about how to train convolutional networks. Model structures with parameter sharing across multiple spatial locations date back to early connectionist models of vision (Marr and Poggio, 1976), but these models did not use the modern back-propagation algorithm and gradient descent. For example, the Neocognitron (Fukushima, 1980) incorporated most of the model architecture design elements of the modern convolutional network but relied on a layer-wise unsupervised clustering algorithm.

Lang and Hinton (1988) introduced the use of back-propagation to train time-delay neural networks (TDNNs). To use contemporary terminology, TDNNs are one-dimensional convolutional networks applied to time series. Back-propagation applied to these models was not inspired by any neuroscientific observation and is considered by some to be biologically implausible. Following the success of back-propagation-based training of TDNNs, LeCun et al. (1989) developed the modern convolutional network by applying the same training algorithm to 2-D convolution applied to images.

So far we have described how simple cells are roughly linear and selective for certain features, how complex cells are more nonlinear and become invariant to some transformations of these simple cell features, and how stacks of layers that alternate between selectivity and invariance can yield grandmother cells for very specific phenomena. We have not yet described precisely what these individual cells detect. In a deep, nonlinear network, it can be difficult to understand the function of individual cells. Simple cells in the first layer are easier to analyze, because their responses are driven by a linear function. In an artificial neural network, we can just display an image of the convolution kernel to see what the corresponding channel of a convolutional layer responds to. In a biological neural network, we do not have access to the weights themselves. Instead, we put an electrode in the neuron itself, display several samples of white noise images in front of the animal's retina, and record how each of these samples causes the neuron to activate. We can then fit a linear model to these responses in order to obtain an approximation of the neuron's weights. This approach is known as reverse correlation (Ringach and Shapley, 2004).

Reverse correlation shows us that most V1 cells have weights that are described by Gabor functions. The Gabor function describes the weight at a 2-D point in the image. We can think of an image as being a function of 2-D coordinates, I(x, y). Likewise, we can think of a simple cell as sampling the image at a set of locations, defined by a set of x coordinates X and a set of y coordinates Y, and applying weights that are also a function of the location, w(x, y). From this point of view, the response of a simple cell to an image is given by

s(I) = \sum_{x \in \mathbb{X}} \sum_{y \in \mathbb{Y}} w(x, y) \, I(x, y).    (9.15)

Specifically, w(x, y) takes the form of a Gabor function:

w(x, y; \alpha, \beta_x, \beta_y, f, \phi, x_0, y_0, \tau) = \alpha \exp\left(-\beta_x x'^2 - \beta_y y'^2\right) \cos(f x' + \phi),    (9.16)

where

x' = (x - x_0)\cos(\tau) + (y - y_0)\sin(\tau)    (9.17)

and

y' = -(x - x_0)\sin(\tau) + (y - y_0)\cos(\tau).    (9.18)

Here, α, β_x, β_y, f, φ, x_0, y_0, and τ are parameters that control the properties of the Gabor function. Figure 9.18 shows some examples of Gabor functions with different settings of these parameters.

The parameters x_0, y_0, and τ define a coordinate system. We translate and rotate x and y to form x' and y'. Specifically, the simple cell will respond to image features centered at the point (x_0, y_0), and it will respond to changes in brightness as we move along a line rotated τ radians from the horizontal.

Viewed as a function of x' and y', the function w then responds to changes in brightness as we move along the x' axis. It has two important factors: one is a Gaussian function and the other is a cosine function.

The Gaussian factor α exp(−β_x x'^2 − β_y y'^2) can be seen as a gating term that ensures the simple cell will respond only to values near where x' and y' are both zero, in other words, near the center of the cell's receptive field. The scaling factor α adjusts the total magnitude of the simple cell's response, while β_x and β_y control how quickly its receptive field falls off.

The cosine factor cos(f x' + φ) controls how the simple cell responds to changing brightness along the x' axis. The parameter f controls the frequency of the cosine, and φ controls its phase offset.

Altogether, this cartoon view of simple cells means that a simple cell responds to a specific spatial frequency of brightness in a specific direction at a specific location. Simple cells are most excited when the wave of brightness in the image has the same phase as the weights. This occurs when the image is bright where the weights are positive and dark where the weights are negative. Simple cells are most inhibited when the wave of brightness is fully out of phase with the weights, that is, when the image is dark where the weights are positive and bright where the weights are negative.

The cartoon view of a complex cell is that it computes the L^2 norm of the 2-D vector containing two simple cells' responses: c(I) = \sqrt{s_0(I)^2 + s_1(I)^2}. An important special case occurs when s_1 has all of the same parameters as s_0 except for φ, and φ is set such that s_1 is one quarter cycle out of phase with s_0. In this case, s_0 and s_1 form a quadrature pair. A complex cell defined in this way responds when the Gaussian reweighted image I(x, y) exp(−β_x x'^2 − β_y y'^2) contains a high-amplitude sinusoidal wave with frequency f in direction τ near (x_0, y_0), regardless of the phase offset of this wave. In other words, the complex cell is invariant to small translations of the image in direction τ, or to negating the image (replacing black with white and vice versa).
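Equations 9.15 through 9.18 translate directly into code. The sketch below builds a Gabor weight grid, computes two simple-cell responses that form a quadrature pair, and combines them into the cartoon complex cell; the grid size, parameter values, and random test image are illustrative assumptions.

```python
import numpy as np

def gabor_weight(x, y, alpha=1.0, beta_x=1.0, beta_y=1.0,
                 f=2.0, phi=0.0, x0=0.0, y0=0.0, tau=0.0):
    """Gabor function of equations 9.16-9.18: a Gaussian envelope times a
    cosine wave, in a coordinate system translated to (x0, y0) and rotated
    by tau radians."""
    xp = (x - x0) * np.cos(tau) + (y - y0) * np.sin(tau)    # eq. 9.17
    yp = -(x - x0) * np.sin(tau) + (y - y0) * np.cos(tau)   # eq. 9.18
    return alpha * np.exp(-beta_x * xp**2 - beta_y * yp**2) * np.cos(f * xp + phi)

def simple_cell_response(image, weights):
    """Equation 9.15: a simple cell's response is a weighted sum of pixels."""
    return np.sum(weights * image)

# A quadrature pair of simple cells (phase offsets a quarter cycle apart),
# combined into a cartoon complex cell via the L2 norm.
coords = np.linspace(-2, 2, 21)
X, Y = np.meshgrid(coords, coords)
w0 = gabor_weight(X, Y, f=3.0, phi=0.0, tau=np.pi / 4)
w1 = gabor_weight(X, Y, f=3.0, phi=np.pi / 2, tau=np.pi / 4)

image = np.random.randn(21, 21)
s0 = simple_cell_response(image, w0)
s1 = simple_cell_response(image, w1)
complex_cell = np.sqrt(s0**2 + s1**2)
print(round(complex_cell, 3))
```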
  • 386. CHAPTER 9. CONVOLUTIONAL NETWORKS Figure 9.18: Gabor functions with a variety of parameter settings. White indicates large positive weight, black indicates large negative weight, and the background gray corresponds to zero weight. (Left)Gabor functions with different values of the parameters that control the coordinate system: x0, y0, and τ. Each Gabor function in this grid is assigned a value of x0 and y0 proportional to its position in its grid, and τ is chosen so that each Gabor filter is sensitive to the direction radiating out from the center of the grid. For the other two plots, x0, y0, and τ are fixed to zero. Gabor functions with (Center) different Gaussian scale parameters βx and βy . Gabor functions are arranged in increasing width (decreasing βx) as we move left to right through the grid, and increasing height (decreasing βy) as we move top to bottom. For the other two plots, the β values are fixed to 1.5× the image width. Gabor functions with different sinusoid parameters (Right) f and φ. As we move top to bottom, f increases, and as we move left to right, φ increases. For the other two plots, is fixed to 0 and is fixed to 5 the image width. φ f × (replacing black with white and vice versa). Some of the most striking correspondences between neuroscience and machine learning come from visually comparing the features learned by machine learning models with those employed by V1. ( ) showed that Olshausen and Field 1996 a simple unsupervised learning algorithm, sparse coding, learns features with receptive fields similar to those of simple cells. Since then, we have found that an extremely wide variety of statistical learning algorithms learn features with Gabor-like functions when applied to natural images. This includes most deep learning algorithms, which learn these features in their first layer. Figure 9.19 shows some examples. Because so many different learning algorithms learn edge detectors, it is difficult to conclude that any specific learning algorithm is the “right” model of the brain just based on the features that it learns (though it can certainly be a bad sign if an algorithm does learn some sort of edge detector not when applied to natural images). These features are an important part of the statistical structure of natural images and can be recovered by many different approaches to statistical modeling. See Hyvärinen 2009 et al. ( ) for a review of the field of natural image statistics. 370
Figure 9.19: Many machine learning algorithms learn features that detect edges or specific colors of edges when applied to natural images. These feature detectors are reminiscent of the Gabor functions known to be present in primary visual cortex. (Left) Weights learned by an unsupervised learning algorithm (spike and slab sparse coding) applied to small image patches. (Right) Convolution kernels learned by the first layer of a fully supervised convolutional maxout network. Neighboring pairs of filters drive the same maxout unit.

9.11 Convolutional Networks and the History of Deep Learning

Convolutional networks have played an important role in the history of deep learning. They are a key example of a successful application of insights obtained by studying the brain to machine learning applications. They were also some of the first deep models to perform well, long before arbitrary deep models were considered viable. Convolutional networks were also some of the first neural networks to solve important commercial applications and remain at the forefront of commercial applications of deep learning today. For example, in the 1990s, the neural network research group at AT&T developed a convolutional network for reading checks (LeCun et al., 1998b). By the end of the 1990s, this system deployed by NEC was reading over 10% of all the checks in the US. Later, several OCR and handwriting recognition systems based on convolutional nets were deployed by Microsoft (Simard et al., 2003). See chapter 12 for more details on such applications and more modern applications of convolutional networks. See LeCun et al. (2010) for a more in-depth history of convolutional networks up to 2010.

Convolutional networks were also used to win many contests. The current intensity of commercial interest in deep learning began when Krizhevsky et al. (2012) won the ImageNet object recognition challenge, but convolutional networks
had been used to win other machine learning and computer vision contests with less impact for years earlier.

Convolutional nets were some of the first working deep networks trained with back-propagation. It is not entirely clear why convolutional networks succeeded when general back-propagation networks were considered to have failed. It may simply be that convolutional networks were more computationally efficient than fully connected networks, so it was easier to run multiple experiments with them and tune their implementation and hyperparameters. Larger networks also seem to be easier to train. With modern hardware, large fully connected networks appear to perform reasonably on many tasks, even when using datasets that were available and activation functions that were popular during the times when fully connected networks were believed not to work well. It may be that the primary barriers to the success of neural networks were psychological (practitioners did not expect neural networks to work, so they did not make a serious effort to use neural networks). Whatever the case, it is fortunate that convolutional networks performed well decades ago. In many ways, they carried the torch for the rest of deep learning and paved the way to the acceptance of neural networks in general.

Convolutional networks provide a way to specialize neural networks to work with data that has a clear grid-structured topology and to scale such models to very large size. This approach has been the most successful on a two-dimensional image topology. To process one-dimensional, sequential data, we turn next to another powerful specialization of the neural networks framework: recurrent neural networks.
Chapter 10

Sequence Modeling: Recurrent and Recursive Nets

Recurrent neural networks, or RNNs (Rumelhart et al., 1986a), are a family of neural networks for processing sequential data. Much as a convolutional network is a neural network that is specialized for processing a grid of values X such as an image, a recurrent neural network is a neural network that is specialized for processing a sequence of values x^{(1)}, ..., x^{(τ)}. Just as convolutional networks can readily scale to images with large width and height, and some convolutional networks can process images of variable size, recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization. Most recurrent networks can also process sequences of variable length.

To go from multi-layer networks to recurrent networks, we need to take advantage of one of the early ideas found in machine learning and statistical models of the 1980s: sharing parameters across different parts of a model. Parameter sharing makes it possible to extend and apply the model to examples of different forms (different lengths, here) and generalize across them. If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence. For example, consider the two sentences "I went to Nepal in 2009" and "In 2009, I went to Nepal." If we ask a machine learning model to read each sentence and extract the year in which the narrator went to Nepal, we would like it to recognize the year 2009 as the relevant piece of information, whether it appears in the sixth
word or the second word of the sentence. Suppose that we trained a feedforward network that processes sentences of fixed length. A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence. By comparison, a recurrent neural network shares the same weights across several time steps.

A related idea is the use of convolution across a 1-D temporal sequence. This convolutional approach is the basis for time-delay neural networks (Lang and Hinton, 1988; Waibel et al., 1989; Lang et al., 1990). The convolution operation allows a network to share parameters across time, but is shallow. The output of convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input. The idea of parameter sharing manifests in the application of the same convolution kernel at each time step. Recurrent networks share parameters in a different way. Each member of the output is a function of the previous members of the output. Each member of the output is produced using the same update rule applied to the previous outputs. This recurrent formulation results in the sharing of parameters through a very deep computational graph.

For the simplicity of exposition, we refer to RNNs as operating on a sequence that contains vectors x^{(t)} with the time step index t ranging from 1 to τ. In practice, recurrent networks usually operate on minibatches of such sequences, with a different sequence length τ for each member of the minibatch. We have omitted the minibatch indices to simplify notation. Moreover, the time step index need not literally refer to the passage of time in the real world. Sometimes it refers only to the position in the sequence. RNNs may also be applied in two dimensions across spatial data such as images, and even when applied to data involving time, the network may have connections that go backwards in time, provided that the entire sequence is observed before it is provided to the network.

This chapter extends the idea of a computational graph to include cycles. These cycles represent the influence of the present value of a variable on its own value at a future time step. Such computational graphs allow us to define recurrent neural networks. We then describe many different ways to construct, train, and use recurrent neural networks.

For more information on recurrent neural networks than is available in this chapter, we refer the reader to the textbook of Graves (2012).
10.1 Unfolding Computational Graphs

A computational graph is a way to formalize the structure of a set of computations, such as those involved in mapping inputs and parameters to outputs and loss. Please refer to section 6.5.1 for a general introduction. In this section we explain the idea of unfolding a recursive or recurrent computation into a computational graph that has a repetitive structure, typically corresponding to a chain of events. Unfolding this graph results in the sharing of parameters across a deep network structure.

For example, consider the classical form of a dynamical system:

    s^{(t)} = f(s^{(t−1)}; θ),    (10.1)

where s^{(t)} is called the state of the system. Equation 10.1 is recurrent because the definition of s at time t refers back to the same definition at time t − 1.

For a finite number of time steps τ, the graph can be unfolded by applying the definition τ − 1 times. For example, if we unfold equation 10.1 for τ = 3 time steps, we obtain

    s^{(3)} = f(s^{(2)}; θ)    (10.2)
           = f(f(s^{(1)}; θ); θ).    (10.3)

Unfolding the equation by repeatedly applying the definition in this way has yielded an expression that does not involve recurrence. Such an expression can now be represented by a traditional directed acyclic computational graph. The unfolded computational graph of equation 10.1 and equation 10.3 is illustrated in figure 10.1.

Figure 10.1: The classical dynamical system described by equation 10.1, illustrated as an unfolded computational graph. Each node represents the state at some time t, and the function f maps the state at t to the state at t + 1. The same parameters (the same value of θ used to parametrize f) are used for all time steps.

As another example, let us consider a dynamical system driven by an external signal x^{(t)},

    s^{(t)} = f(s^{(t−1)}, x^{(t)}; θ),    (10.4)

where we see that the state now contains information about the whole past sequence.
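To make the unfolding concrete, here is a toy sketch (not from the text); the particular choice of f is arbitrary, and the point is only that the same function and the same parameter θ are reapplied at every step, whether the recurrence is written as a loop or expanded by hand.

```python
def f(s, x, theta):
    # One step of the driven dynamical system of equation 10.4 (a toy choice of f).
    return theta * s + x

def unfold(s0, xs, theta):
    """Unroll the recurrence: apply the same transition f once per time step."""
    states = [s0]
    for x in xs:
        states.append(f(states[-1], x, theta))
    return states

# Unfolding for tau = 3 steps computes f(f(f(s0, x1), x2), x3), as in equation 10.3.
print(unfold(1.0, [0.5, -0.2, 0.1], theta=0.9))
```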
Recurrent neural networks can be built in many different ways. Much as almost any function can be considered a feedforward neural network, essentially any function involving recurrence can be considered a recurrent neural network.

Many recurrent neural networks use equation 10.5 or a similar equation to define the values of their hidden units. To indicate that the state is the hidden units of the network, we now rewrite equation 10.4 using the variable h to represent the state:

    h^{(t)} = f(h^{(t−1)}, x^{(t)}; θ),    (10.5)

as illustrated in figure 10.2. Typical RNNs will add extra architectural features such as output layers that read information out of the state h to make predictions.

When the recurrent network is trained to perform a task that requires predicting the future from the past, the network typically learns to use h^{(t)} as a kind of lossy summary of the task-relevant aspects of the past sequence of inputs up to t. This summary is in general necessarily lossy, since it maps an arbitrary length sequence (x^{(t)}, x^{(t−1)}, x^{(t−2)}, ..., x^{(2)}, x^{(1)}) to a fixed length vector h^{(t)}. Depending on the training criterion, this summary might selectively keep some aspects of the past sequence with more precision than other aspects. For example, if the RNN is used in statistical language modeling, typically to predict the next word given previous words, it may not be necessary to store all of the information in the input sequence up to time t, but rather only enough information to predict the rest of the sentence. The most demanding situation is when we ask h^{(t)} to be rich enough to allow one to approximately recover the input sequence, as in autoencoder frameworks (chapter 14).

Figure 10.2: A recurrent network with no outputs. This recurrent network just processes information from the input x by incorporating it into the state h that is passed forward through time. (Left) Circuit diagram. The black square indicates a delay of a single time step. (Right) The same network seen as an unfolded computational graph, where each node is now associated with one particular time instance.

Equation 10.5 can be drawn in two different ways. One way to draw the RNN is with a diagram containing one node for every component that might exist in a
physical implementation of the model, such as a biological neural network. In this view, the network defines a circuit that operates in real time, with physical parts whose current state can influence their future state, as in the left of figure 10.2. Throughout this chapter, we use a black square in a circuit diagram to indicate that an interaction takes place with a delay of a single time step, from the state at time t to the state at time t + 1. The other way to draw the RNN is as an unfolded computational graph, in which each component is represented by many different variables, with one variable per time step, representing the state of the component at that point in time. Each variable for each time step is drawn as a separate node of the computational graph, as in the right of figure 10.2. What we call unfolding is the operation that maps a circuit as in the left side of the figure to a computational graph with repeated pieces as in the right side. The unfolded graph now has a size that depends on the sequence length.

We can represent the unfolded recurrence after t steps with a function g^{(t)}:

    h^{(t)} = g^{(t)}(x^{(t)}, x^{(t−1)}, x^{(t−2)}, ..., x^{(2)}, x^{(1)})    (10.6)
           = f(h^{(t−1)}, x^{(t)}; θ).    (10.7)

The function g^{(t)} takes the whole past sequence (x^{(t)}, x^{(t−1)}, x^{(t−2)}, ..., x^{(2)}, x^{(1)}) as input and produces the current state, but the unfolded recurrent structure allows us to factorize g^{(t)} into repeated application of a function f. The unfolding process thus introduces two major advantages:

1. Regardless of the sequence length, the learned model always has the same input size, because it is specified in terms of transition from one state to another state, rather than specified in terms of a variable-length history of states.

2. It is possible to use the same transition function f with the same parameters at every time step.

These two factors make it possible to learn a single model f that operates on all time steps and all sequence lengths, rather than needing to learn a separate model g^{(t)} for all possible time steps. Learning a single, shared model allows generalization to sequence lengths that did not appear in the training set, and allows the model to be estimated with far fewer training examples than would be required without parameter sharing.

Both the recurrent graph and the unrolled graph have their uses. The recurrent graph is succinct. The unfolded graph provides an explicit description of which computations to perform. The unfolded graph also helps to illustrate the idea of
information flow forward in time (computing outputs and losses) and backward in time (computing gradients) by explicitly showing the path along which this information flows.

10.2 Recurrent Neural Networks

Armed with the graph unrolling and parameter sharing ideas of section 10.1, we can design a wide variety of recurrent neural networks.

Figure 10.3: The computational graph to compute the training loss of a recurrent network that maps an input sequence of x values to a corresponding sequence of output o values. A loss L measures how far each o is from the corresponding training target y. When using softmax outputs, we assume o is the unnormalized log probabilities. The loss L internally computes ŷ = softmax(o) and compares this to the target y. The RNN has input-to-hidden connections parametrized by a weight matrix U, hidden-to-hidden recurrent connections parametrized by a weight matrix W, and hidden-to-output connections parametrized by a weight matrix V. Equation 10.8 defines forward propagation in this model. (Left) The RNN and its loss drawn with recurrent connections. (Right) The same seen as a time-unfolded computational graph, where each node is now associated with one particular time instance.

Some examples of important design patterns for recurrent neural networks include the following:
• Recurrent networks that produce an output at each time step and have recurrent connections between hidden units, illustrated in figure 10.3.

• Recurrent networks that produce an output at each time step and have recurrent connections only from the output at one time step to the hidden units at the next time step, illustrated in figure 10.4.

• Recurrent networks with recurrent connections between hidden units, that read an entire sequence and then produce a single output, illustrated in figure 10.5.

Figure 10.3 is a reasonably representative example that we return to throughout most of the chapter.

The recurrent neural network of figure 10.3 and equation 10.8 is universal in the sense that any function computable by a Turing machine can be computed by such a recurrent network of a finite size. The output can be read from the RNN after a number of time steps that is asymptotically linear in the number of time steps used by the Turing machine and asymptotically linear in the length of the input (Siegelmann and Sontag, 1991; Siegelmann, 1995; Siegelmann and Sontag, 1995; Hyotyniemi, 1996). The functions computable by a Turing machine are discrete, so these results regard exact implementation of the function, not approximations. The RNN, when used as a Turing machine, takes a binary sequence as input and its outputs must be discretized to provide a binary output. It is possible to compute all functions in this setting using a single specific RNN of finite size (Siegelmann and Sontag (1995) use 886 units). The "input" of the Turing machine is a specification of the function to be computed, so the same network that simulates this Turing machine is sufficient for all problems. The theoretical RNN used for the proof can simulate an unbounded stack by representing its activations and weights with rational numbers of unbounded precision.

We now develop the forward propagation equations for the RNN depicted in figure 10.3. The figure does not specify the choice of activation function for the hidden units. Here we assume the hyperbolic tangent activation function. Also, the figure does not specify exactly what form the output and loss function take. Here we assume that the output is discrete, as if the RNN is used to predict words or characters. A natural way to represent discrete variables is to regard the output o as giving the unnormalized log probabilities of each possible value of the discrete variable. We can then apply the softmax operation as a post-processing step to obtain a vector ŷ of normalized probabilities over the output. Forward propagation begins with a specification of the initial state h^{(0)}. Then, for each time step from
Figure 10.4: An RNN whose only recurrence is the feedback connection from the output to the hidden layer. At each time step t, the input is x^{(t)}, the hidden layer activations are h^{(t)}, the outputs are o^{(t)}, the targets are y^{(t)}, and the loss is L^{(t)}. (Left) Circuit diagram. (Right) Unfolded computational graph. Such an RNN is less powerful (can express a smaller set of functions) than those in the family represented by figure 10.3. The RNN in figure 10.3 can choose to put any information it wants about the past into its hidden representation h and transmit h to the future. The RNN in this figure is trained to put a specific output value into o, and o is the only information it is allowed to send to the future. There are no direct connections from h going forward. The previous h is connected to the present only indirectly, via the predictions it was used to produce. Unless o is very high-dimensional and rich, it will usually lack important information from the past. This makes the RNN in this figure less powerful, but it may be easier to train because each time step can be trained in isolation from the others, allowing greater parallelization during training, as described in section 10.2.1.
t = 1 to t = τ, we apply the following update equations:

    a^{(t)} = b + W h^{(t−1)} + U x^{(t)}    (10.8)
    h^{(t)} = tanh(a^{(t)})    (10.9)
    o^{(t)} = c + V h^{(t)}    (10.10)
    ŷ^{(t)} = softmax(o^{(t)}),    (10.11)

where the parameters are the bias vectors b and c along with the weight matrices U, V and W, respectively for input-to-hidden, hidden-to-output and hidden-to-hidden connections. This is an example of a recurrent network that maps an input sequence to an output sequence of the same length. The total loss for a given sequence of x values paired with a sequence of y values would then be just the sum of the losses over all the time steps. For example, if L^{(t)} is the negative log-likelihood of y^{(t)} given x^{(1)}, ..., x^{(t)}, then

    L({x^{(1)}, ..., x^{(τ)}}, {y^{(1)}, ..., y^{(τ)}})    (10.12)
    = Σ_t L^{(t)}    (10.13)
    = − Σ_t log p_model(y^{(t)} | {x^{(1)}, ..., x^{(t)}}),    (10.14)

where p_model(y^{(t)} | {x^{(1)}, ..., x^{(t)}}) is given by reading the entry for y^{(t)} from the model's output vector ŷ^{(t)}.
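As an informal illustration (not part of the text), the sketch below implements the forward pass and loss of equations 10.8–10.14 in NumPy for integer-valued targets; the function name and argument layout are conveniences of this sketch, and the per-step states and softmax outputs are returned so that a back-propagation sketch (given in section 10.2.2 below) can reuse them.

```python
import numpy as np

def rnn_forward(x_seq, y_seq, U, V, W, b, c, h0):
    """Forward pass and total loss for the RNN of equations 10.8-10.14.

    x_seq: list of input vectors; y_seq: list of integer class targets.
    Returns the hidden states (including h0), the softmax outputs, and the loss.
    """
    h, loss = h0, 0.0
    hs, y_hats = [h0], []
    for x, y in zip(x_seq, y_seq):
        a = b + W @ h + U @ x              # equation 10.8
        h = np.tanh(a)                     # equation 10.9
        o = c + V @ h                      # equation 10.10
        y_hat = np.exp(o - o.max())
        y_hat /= y_hat.sum()               # equation 10.11 (softmax)
        loss -= np.log(y_hat[y])           # equations 10.12-10.14 (summed NLL)
        hs.append(h)
        y_hats.append(y_hat)
    return hs, y_hats, loss
```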
Computing the gradient of this loss function with respect to the parameters is an expensive operation. The gradient computation involves performing a forward propagation pass moving left to right through our illustration of the unrolled graph in figure 10.3, followed by a backward propagation pass moving right to left through the graph. The runtime is O(τ) and cannot be reduced by parallelization, because the forward propagation graph is inherently sequential: each time step may only be computed after the previous one. States computed in the forward pass must be stored until they are reused during the backward pass, so the memory cost is also O(τ). The back-propagation algorithm applied to the unrolled graph with O(τ) cost is called back-propagation through time, or BPTT, and is discussed further in section 10.2.2. The network with recurrence between hidden units is thus very powerful but also expensive to train. Is there an alternative?

10.2.1 Teacher Forcing and Networks with Output Recurrence

The network with recurrent connections only from the output at one time step to the hidden units at the next time step (shown in figure 10.4) is strictly less powerful because it lacks hidden-to-hidden recurrent connections. For example, it cannot simulate a universal Turing machine. Because this network lacks hidden-to-hidden recurrence, it requires that the output units capture all of the information about the past that the network will use to predict the future. Because the output units are explicitly trained to match the training set targets, they are unlikely to capture the necessary information about the past history of the input, unless the user knows how to describe the full state of the system and provides it as part of the training set targets.

The advantage of eliminating hidden-to-hidden recurrence is that, for any loss function based on comparing the prediction at time t to the training target at time t, all the time steps are decoupled. Training can thus be parallelized, with the gradient for each step t computed in isolation. There is no need to compute the output for the previous time step first, because the training set provides the ideal value of that output.

Figure 10.5: Time-unfolded recurrent neural network with a single output at the end of the sequence. Such a network can be used to summarize a sequence and produce a fixed-size representation used as input for further processing. There might be a target right at the end (as depicted here), or the gradient on the output o^{(t)} can be obtained by back-propagating from further downstream modules.

Models that have recurrent connections from their outputs leading back into the model may be trained with teacher forcing. Teacher forcing is a procedure that emerges from the maximum likelihood criterion, in which during training the model receives the ground truth output y^{(t)} as input at time t + 1. We can see this by examining a sequence with two time steps. The conditional maximum
Figure 10.6: Illustration of teacher forcing. Teacher forcing is a training technique that is applicable to RNNs that have connections from their output to their hidden states at the next time step. (Left) At train time, we feed the correct output y^{(t)} drawn from the train set as input to h^{(t+1)}. (Right) When the model is deployed, the true output is generally not known. In this case, we approximate the correct output y^{(t)} with the model's output o^{(t)}, and feed the output back into the model.
likelihood criterion is

    log p(y^{(1)}, y^{(2)} | x^{(1)}, x^{(2)})    (10.15)
    = log p(y^{(2)} | y^{(1)}, x^{(1)}, x^{(2)}) + log p(y^{(1)} | x^{(1)}, x^{(2)}).    (10.16)

In this example, we see that at time t = 2, the model is trained to maximize the conditional probability of y^{(2)} given both the x sequence so far and the previous y value from the training set. Maximum likelihood thus specifies that during training, rather than feeding the model's own output back into itself, these connections should be fed with the target values specifying what the correct output should be. This is illustrated in figure 10.6.

We originally motivated teacher forcing as allowing us to avoid back-propagation through time in models that lack hidden-to-hidden connections. Teacher forcing may still be applied to models that have hidden-to-hidden connections as long as they have connections from the output at one time step to values computed in the next time step. However, as soon as the hidden units become a function of earlier time steps, the BPTT algorithm is necessary. Some models may thus be trained with both teacher forcing and BPTT.

The disadvantage of strict teacher forcing arises if the network is going to be later used in an open-loop mode, with the network outputs (or samples from the output distribution) fed back as input. In this case, the kind of inputs that the network sees during training could be quite different from the kind of inputs that it will see at test time. One way to mitigate this problem is to train with both teacher-forced inputs and free-running inputs, for example by predicting the correct target a number of steps in the future through the unfolded recurrent output-to-input paths. In this way, the network can learn to take into account input conditions (such as those it generates itself in the free-running mode) not seen during training and how to map the state back towards one that will make the network generate proper outputs after a few steps. Another approach (Bengio et al., 2015b) to mitigate the gap between the inputs seen at train time and the inputs seen at test time randomly chooses to use generated values or actual data values as input. This approach exploits a curriculum learning strategy to gradually use more of the generated values as input.
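The contrast between teacher-forced and free-running operation can be sketched as follows (this is an illustration of mine, not the book's; the step function and its parameters are hypothetical stand-ins for an output-recurrent RNN).

```python
import numpy as np

def step(h_prev, y_prev, params):
    """One step of a hypothetical output-recurrent RNN: the previous output
    (or the previous target, under teacher forcing) is fed back as input."""
    W, U, V, b, c = params
    h = np.tanh(b + W @ h_prev + U @ y_prev)
    return h, c + V @ h

def run(y_targets, params, h0, teacher_forcing=True):
    h, y_prev, outputs = h0, np.zeros_like(y_targets[0]), []
    for y_true in y_targets:
        h, o = step(h, y_prev, params)
        outputs.append(o)
        # Teacher forcing feeds the ground-truth target back in during training;
        # in open-loop (test-time) mode the model's own output is fed back instead.
        y_prev = y_true if teacher_forcing else o
    return outputs
```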
10.2.2 Computing the Gradient in a Recurrent Neural Network

Computing the gradient through a recurrent neural network is straightforward. One simply applies the generalized back-propagation algorithm of section 6.5.6 to the unrolled computational graph. No specialized algorithms are necessary. Gradients obtained by back-propagation may then be used with any general-purpose gradient-based techniques to train an RNN.

To gain some intuition for how the BPTT algorithm behaves, we provide an example of how to compute gradients by BPTT for the RNN equations above (equation 10.8 and equation 10.12). The nodes of our computational graph include the parameters U, V, W, b and c as well as the sequence of nodes indexed by t for x^{(t)}, h^{(t)}, o^{(t)} and L^{(t)}. For each node N we need to compute the gradient ∇_N L recursively, based on the gradient computed at nodes that follow it in the graph. We start the recursion with the nodes immediately preceding the final loss:

    ∂L / ∂L^{(t)} = 1.    (10.17)

In this derivation we assume that the outputs o^{(t)} are used as the argument to the softmax function to obtain the vector ŷ of probabilities over the output. We also assume that the loss is the negative log-likelihood of the true target y^{(t)} given the input so far. The gradient ∇_{o^{(t)}} L on the outputs at time step t, for all i, t, is as follows:

    (∇_{o^{(t)}} L)_i = ∂L / ∂o_i^{(t)} = (∂L / ∂L^{(t)}) (∂L^{(t)} / ∂o_i^{(t)}) = ŷ_i^{(t)} − 1_{i, y^{(t)}}.    (10.18)

We work our way backwards, starting from the end of the sequence. At the final time step τ, h^{(τ)} only has o^{(τ)} as a descendent, so its gradient is simple:

    ∇_{h^{(τ)}} L = V^⊤ ∇_{o^{(τ)}} L.    (10.19)

We can then iterate backwards in time to back-propagate gradients through time, from t = τ − 1 down to t = 1, noting that h^{(t)} (for t < τ) has as descendents both o^{(t)} and h^{(t+1)}. Its gradient is thus given by

    ∇_{h^{(t)}} L = (∂h^{(t+1)} / ∂h^{(t)})^⊤ (∇_{h^{(t+1)}} L) + (∂o^{(t)} / ∂h^{(t)})^⊤ (∇_{o^{(t)}} L)    (10.20)
                 = W^⊤ diag(1 − (h^{(t+1)})²) (∇_{h^{(t+1)}} L) + V^⊤ (∇_{o^{(t)}} L),    (10.21)

where diag(1 − (h^{(t+1)})²) indicates the diagonal matrix containing the elements 1 − (h_i^{(t+1)})². This is the Jacobian of the hyperbolic tangent associated with the hidden unit i at time t + 1.
Once the gradients on the internal nodes of the computational graph are obtained, we can obtain the gradients on the parameter nodes. Because the parameters are shared across many time steps, we must take some care when denoting calculus operations involving these variables. The equations we wish to implement use the bprop method of section 6.5.6, which computes the contribution of a single edge in the computational graph to the gradient. However, the ∇_W f operator used in calculus takes into account the contribution of W to the value of f due to all edges in the computational graph. To resolve this ambiguity, we introduce dummy variables W^{(t)} that are defined to be copies of W but with each W^{(t)} used only at time step t. We may then use ∇_{W^{(t)}} to denote the contribution of the weights at time step t to the gradient.

Using this notation, the gradient on the remaining parameters is given by:

    ∇_c L = Σ_t (∂o^{(t)} / ∂c)^⊤ ∇_{o^{(t)}} L = Σ_t ∇_{o^{(t)}} L    (10.22)
    ∇_b L = Σ_t (∂h^{(t)} / ∂b^{(t)})^⊤ ∇_{h^{(t)}} L = Σ_t diag(1 − (h^{(t)})²) ∇_{h^{(t)}} L    (10.23)
    ∇_V L = Σ_t Σ_i (∂L / ∂o_i^{(t)}) ∇_V o_i^{(t)} = Σ_t (∇_{o^{(t)}} L) h^{(t)⊤}    (10.24)
    ∇_W L = Σ_t Σ_i (∂L / ∂h_i^{(t)}) ∇_{W^{(t)}} h_i^{(t)}    (10.25)
          = Σ_t diag(1 − (h^{(t)})²) (∇_{h^{(t)}} L) h^{(t−1)⊤}    (10.26)
    ∇_U L = Σ_t Σ_i (∂L / ∂h_i^{(t)}) ∇_{U^{(t)}} h_i^{(t)}    (10.27)
          = Σ_t diag(1 − (h^{(t)})²) (∇_{h^{(t)}} L) x^{(t)⊤}.    (10.28)

We do not need to compute the gradient with respect to x^{(t)} for training, because it does not have any parameters as ancestors in the computational graph defining the loss.
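For illustration only, here is a NumPy sketch of equations 10.17–10.28 that reuses the rnn_forward sketch from section 10.2 (the stored hidden states hs and softmax outputs y_hats); the variable names and interface are assumptions of the sketch, not the book's.

```python
def rnn_backward(x_seq, y_seq, hs, y_hats, U, V, W):
    """Back-propagation through time for the tanh RNN of equations 10.8-10.11."""
    dU, dV, dW = np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)
    db, dc = np.zeros(W.shape[0]), np.zeros(V.shape[0])
    dh_next = np.zeros(W.shape[0])                # gradient arriving from step t + 1
    for t in reversed(range(len(x_seq))):
        do = y_hats[t].copy()
        do[y_seq[t]] -= 1.0                       # equation 10.18
        h, h_prev = hs[t + 1], hs[t]              # hs[0] holds the initial state h^(0)
        dh = V.T @ do + dh_next                   # equations 10.19-10.21
        da = (1.0 - h ** 2) * dh                  # through the tanh Jacobian
        dc += do                                  # equation 10.22
        db += da                                  # equation 10.23
        dV += np.outer(do, h)                     # equation 10.24
        dW += np.outer(da, h_prev)                # equations 10.25-10.26
        dU += np.outer(da, x_seq[t])              # equations 10.27-10.28
        dh_next = W.T @ da                        # to be used at step t - 1
    return dU, dV, dW, db, dc
```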
10.2.3 Recurrent Networks as Directed Graphical Models

In the example recurrent network we have developed so far, the losses L^{(t)} were cross-entropies between training targets y^{(t)} and outputs o^{(t)}. As with a feedforward network, it is in principle possible to use almost any loss with a recurrent network. The loss should be chosen based on the task. As with a feedforward network, we usually wish to interpret the output of the RNN as a probability distribution, and we usually use the cross-entropy associated with that distribution to define the loss. Mean squared error is the cross-entropy loss associated with an output distribution that is a unit Gaussian, for example, just as with a feedforward network.

When we use a predictive log-likelihood training objective, such as equation 10.12, we train the RNN to estimate the conditional distribution of the next sequence element y^{(t)} given the past inputs. This may mean that we maximize the log-likelihood

    log p(y^{(t)} | x^{(1)}, ..., x^{(t)}),    (10.29)

or, if the model includes connections from the output at one time step to the next time step,

    log p(y^{(t)} | x^{(1)}, ..., x^{(t)}, y^{(1)}, ..., y^{(t−1)}).    (10.30)

Decomposing the joint probability over the sequence of y values as a series of one-step probabilistic predictions is one way to capture the full joint distribution across the whole sequence. When we do not feed past y values as inputs that condition the next step prediction, the directed graphical model contains no edges from any y^{(i)} in the past to the current y^{(t)}. In this case, the outputs y are conditionally independent given the sequence of x values. When we do feed the actual y values (not their prediction, but the actual observed or generated values) back into the network, the directed graphical model contains edges from all y^{(i)} values in the past to the current y^{(t)} value.

As a simple example, let us consider the case where the RNN models only a sequence of scalar random variables Y = {y^{(1)}, ..., y^{(τ)}}, with no additional inputs x. The input at time step t is simply the output at time step t − 1. The RNN then defines a directed graphical model over the y variables. We parametrize the joint distribution of these observations using the chain rule (equation 3.6) for conditional probabilities:

    P(Y) = P(y^{(1)}, ..., y^{(τ)}) = ∏_{t=1}^{τ} P(y^{(t)} | y^{(t−1)}, y^{(t−2)}, ..., y^{(1)}),    (10.31)

where the right-hand side of the bar is empty for t = 1, of course. Hence the negative log-likelihood of a set of values {y^{(1)}, ..., y^{(τ)}} according to such a model
is

    L = Σ_t L^{(t)},    (10.32)

where

    L^{(t)} = − log P(y^{(t)} | y^{(t−1)}, y^{(t−2)}, ..., y^{(1)}).    (10.33)

Figure 10.7: Fully connected graphical model for a sequence y^{(1)}, y^{(2)}, ..., y^{(t)}, ...: every past observation y^{(i)} may influence the conditional distribution of some y^{(t)} (for t > i), given the previous values. Parametrizing the graphical model directly according to this graph (as in equation 10.6) might be very inefficient, with an ever growing number of inputs and parameters for each element of the sequence. RNNs obtain the same full connectivity but efficient parametrization, as illustrated in figure 10.8.

Figure 10.8: Introducing the state variable in the graphical model of the RNN, even though it is a deterministic function of its inputs, helps to see how we can obtain a very efficient parametrization, based on equation 10.5. Every stage in the sequence (for h^{(t)} and y^{(t)}) involves the same structure (the same number of inputs for each node) and can share the same parameters with the other stages.

The edges in a graphical model indicate which variables depend directly on other variables. Many graphical models aim to achieve statistical and computational efficiency by omitting edges that do not correspond to strong interactions. For
example, it is common to make the Markov assumption that the graphical model should only contain edges from {y^{(t−k)}, ..., y^{(t−1)}} to y^{(t)}, rather than containing edges from the entire past history. However, in some cases, we believe that all past inputs should have an influence on the next element of the sequence. RNNs are useful when we believe that the distribution over y^{(t)} may depend on a value of y^{(i)} from the distant past in a way that is not captured by the effect of y^{(i)} on y^{(t−1)}.

One way to interpret an RNN as a graphical model is to view the RNN as defining a graphical model whose structure is the complete graph, able to represent direct dependencies between any pair of y values. The graphical model over the y values with the complete graph structure is shown in figure 10.7. The complete graph interpretation of the RNN is based on ignoring the hidden units h^{(t)} by marginalizing them out of the model.

It is more interesting to consider the graphical model structure of RNNs that results from regarding the hidden units h^{(t)} as random variables. (The conditional distribution over these variables given their parents is deterministic. This is perfectly legitimate, though it is somewhat rare to design a graphical model with such deterministic hidden units.) Including the hidden units in the graphical model reveals that the RNN provides a very efficient parametrization of the joint distribution over the observations. Suppose that we represented an arbitrary joint distribution over discrete values with a tabular representation: an array containing a separate entry for each possible assignment of values, with the value of that entry giving the probability of that assignment occurring. If y can take on k different values, the tabular representation would have O(k^τ) parameters. By comparison, due to parameter sharing, the number of parameters in the RNN is O(1) as a function of sequence length. The number of parameters in the RNN may be adjusted to control model capacity but is not forced to scale with sequence length. Equation 10.5 shows that the RNN parametrizes long-term relationships between variables efficiently, using recurrent applications of the same function f and same parameters θ at each time step. Figure 10.8 illustrates the graphical model interpretation. Incorporating the h^{(t)} nodes in the graphical model decouples the past and the future, acting as an intermediate quantity between them. A variable y^{(i)} in the distant past may influence a variable y^{(t)} via its effect on h. The structure of this graph shows that the model can be efficiently parametrized by using the same conditional probability distributions at each time step, and that when the variables are all observed, the probability of the joint assignment of all variables can be evaluated efficiently.

Even with the efficient parametrization of the graphical model, some operations remain computationally challenging. For example, it is difficult to predict missing
values in the middle of the sequence.

The price recurrent networks pay for their reduced number of parameters is that optimizing the parameters may be difficult.

The parameter sharing used in recurrent networks relies on the assumption that the same parameters can be used for different time steps. Equivalently, the assumption is that the conditional probability distribution over the variables at time t + 1 given the variables at time t is stationary, meaning that the relationship between the previous time step and the next time step does not depend on t. In principle, it would be possible to use t as an extra input at each time step and let the learner discover any time-dependence while sharing as much as it can between different time steps. This would already be much better than using a different conditional probability distribution for each t, but the network would then have to extrapolate when faced with new values of t.

To complete our view of an RNN as a graphical model, we must describe how to draw samples from the model. The main operation that we need to perform is simply to sample from the conditional distribution at each time step. However, there is one additional complication. The RNN must have some mechanism for determining the length of the sequence. This can be achieved in various ways.

In the case when the output is a symbol taken from a vocabulary, one can add a special symbol corresponding to the end of a sequence (Schmidhuber, 2012). When that symbol is generated, the sampling process stops. In the training set, we insert this symbol as an extra member of the sequence, immediately after x^{(τ)} in each training example.

Another option is to introduce an extra Bernoulli output to the model that represents the decision to either continue generation or halt generation at each time step. This approach is more general than the approach of adding an extra symbol to the vocabulary, because it may be applied to any RNN, rather than only RNNs that output a sequence of symbols. For example, it may be applied to an RNN that emits a sequence of real numbers. The new output unit is usually a sigmoid unit trained with the cross-entropy loss. In this approach the sigmoid is trained to maximize the log-probability of the correct prediction as to whether the sequence ends or continues at each time step.
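A minimal sampling loop using the end-of-sequence-symbol strategy described above might look as follows (an illustrative sketch of mine; step_fn stands in for one forward step of a trained RNN that returns the next state and the softmax distribution over symbols, and the start-symbol convention is an assumption).

```python
import numpy as np

def sample_sequence(step_fn, h0, eos_id, max_len=100, rng=None):
    """Draw one sequence by sampling a symbol per step and stopping at the
    special end-of-sequence symbol (or at a safety cap on the length)."""
    rng = rng or np.random.default_rng()
    h, prev, tokens = h0, eos_id, []        # using EOS as a start symbol (assumption)
    while len(tokens) < max_len:
        h, probs = step_fn(h, prev)         # one RNN step: new state, P(next symbol)
        prev = rng.choice(len(probs), p=probs)
        if prev == eos_id:
            break
        tokens.append(prev)
    return tokens
```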
Another way to determine the sequence length τ is to add an extra output to the model that predicts the integer τ itself. The model can sample a value of τ and then sample τ steps worth of data. This approach requires adding an extra input to the recurrent update at each time step so that the recurrent update is aware of whether it is near the end of the generated sequence. This extra input can either consist of the value of τ or can consist of τ − t, the number of remaining time steps. Without this extra input, the RNN might generate sequences that end abruptly, such as a sentence that ends before it is complete. This approach is based on the decomposition

    P(x^{(1)}, ..., x^{(τ)}) = P(τ) P(x^{(1)}, ..., x^{(τ)} | τ).    (10.34)

The strategy of predicting τ directly is used, for example, by Goodfellow et al. (2014d).

10.2.4 Modeling Sequences Conditioned on Context with RNNs

In the previous section we described how an RNN could correspond to a directed graphical model over a sequence of random variables y^{(t)} with no inputs x. Of course, our development of RNNs as in equation 10.8 included a sequence of inputs x^{(1)}, x^{(2)}, ..., x^{(τ)}. In general, RNNs allow the extension of the graphical model view to represent not only a joint distribution over the y variables but also a conditional distribution over y given x. As discussed in the context of feedforward networks in section 6.2.1.1, any model representing a variable P(y; θ) can be reinterpreted as a model representing a conditional distribution P(y | ω) with ω = θ. We can extend such a model to represent a distribution P(y | x) by using the same P(y | ω) as before, but making ω a function of x. In the case of an RNN, this can be achieved in different ways. We review here the most common and obvious choices.

Previously, we have discussed RNNs that take a sequence of vectors x^{(t)} for t = 1, ..., τ as input. Another option is to take only a single vector x as input. When x is a fixed-size vector, we can simply make it an extra input of the RNN that generates the y sequence. Some common ways of providing an extra input to an RNN are:

1. as an extra input at each time step, or
2. as the initial state h^{(0)}, or
3. both.

The first and most common approach is illustrated in figure 10.9. The interaction between the input x and each hidden unit vector h^{(t)} is parametrized by a newly introduced weight matrix R that was absent from the model of only the sequence of y values. The same product x^⊤R is added as additional input to the hidden units at every time step. We can think of the choice of x as determining the value of x^⊤R that is effectively a new bias parameter used for each of the hidden units. The weights remain independent of the input. We can think of this model as taking the parameters θ of the non-conditional model and turning them into ω, where the bias parameters within ω are now a function of the input.
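The first option can be sketched in a few lines (my illustration, not the book's); the R matrix plays the role described above, and for brevity the sketch omits the feedback of the previous output y^{(t−1)} that figure 10.9 also includes.

```python
def context_conditioned_states(x_context, n_steps, W, R, b, h0):
    """Hidden states of an RNN conditioned on a single fixed vector x_context.

    R @ x_context acts like an extra, input-dependent bias added at every step."""
    context_bias = R @ x_context          # computed once, reused at each time step
    h, hs = h0, []
    for _ in range(n_steps):
        h = np.tanh(b + W @ h + context_bias)
        hs.append(h)
    return hs
```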
Figure 10.9: An RNN that maps a fixed-length vector x into a distribution over sequences Y. This RNN is appropriate for tasks such as image captioning, where a single image is used as input to a model that then produces a sequence of words describing the image. Each element y^{(t)} of the observed output sequence serves both as input (for the current time step) and, during training, as target (for the previous time step).

Rather than receiving only a single vector x as input, the RNN may receive a sequence of vectors x^{(t)} as input. The RNN described in equation 10.8 corresponds to a conditional distribution P(y^{(1)}, ..., y^{(τ)} | x^{(1)}, ..., x^{(τ)}) that makes a conditional independence assumption that this distribution factorizes as

    ∏_t P(y^{(t)} | x^{(1)}, ..., x^{(t)}).    (10.35)

To remove the conditional independence assumption, we can add connections from the output at time t to the hidden unit at time t + 1, as shown in figure 10.10. The model can then represent arbitrary probability distributions over the y sequence. This kind of model representing a distribution over a sequence given another
Figure 10.10: A conditional recurrent neural network mapping a variable-length sequence of x values into a distribution over sequences of y values of the same length. Compared to figure 10.3, this RNN contains connections from the previous output to the current state. These connections allow this RNN to model an arbitrary distribution over sequences of y given sequences of x of the same length. The RNN of figure 10.3 is only able to represent distributions in which the y values are conditionally independent from each other given the x values.
sequence still has one restriction, which is that the length of both sequences must be the same. We describe how to remove this restriction in section 10.4.

Figure 10.11: Computation of a typical bidirectional recurrent neural network, meant to learn to map input sequences x to target sequences y, with loss L^{(t)} at each step t. The h recurrence propagates information forward in time (towards the right), while the g recurrence propagates information backward in time (towards the left). Thus at each point t, the output units o^{(t)} can benefit from a relevant summary of the past in its h^{(t)} input and from a relevant summary of the future in its g^{(t)} input.

10.3 Bidirectional RNNs

All of the recurrent networks we have considered up to now have a "causal" structure, meaning that the state at time t captures only information from the past, x^{(1)}, ..., x^{(t−1)}, and the present input x^{(t)}. Some of the models we have discussed also allow information from past y values to affect the current state when the y values are available.

However, in many applications we want to output a prediction of y^{(t)} which may
depend on the whole input sequence. For example, in speech recognition, the correct interpretation of the current sound as a phoneme may depend on the next few phonemes because of co-articulation and potentially may even depend on the next few words because of the linguistic dependencies between nearby words: if there are two interpretations of the current word that are both acoustically plausible, we may have to look far into the future (and the past) to disambiguate them. This is also true of handwriting recognition and many other sequence-to-sequence learning tasks, described in the next section.

Bidirectional recurrent neural networks (or bidirectional RNNs) were invented to address that need (Schuster and Paliwal, 1997). They have been extremely successful (Graves, 2012) in applications where that need arises, such as handwriting recognition (Graves et al., 2008; Graves and Schmidhuber, 2009), speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013) and bioinformatics (Baldi et al., 1999).

As the name suggests, bidirectional RNNs combine an RNN that moves forward through time beginning from the start of the sequence with another RNN that moves backward through time beginning from the end of the sequence. Figure 10.11 illustrates the typical bidirectional RNN, with h^{(t)} standing for the state of the sub-RNN that moves forward through time and g^{(t)} standing for the state of the sub-RNN that moves backward through time. This allows the output units o^{(t)} to compute a representation that depends on both the past and the future but is most sensitive to the input values around time t, without having to specify a fixed-size window around t (as one would have to do with a feedforward network, a convolutional network, or a regular RNN with a fixed-size look-ahead buffer).

This idea can be naturally extended to 2-dimensional input, such as images, by having four RNNs, each one going in one of the four directions: up, down, left, right. At each point (i, j) of a 2-D grid, an output O_{i,j} could then compute a representation that would capture mostly local information but could also depend on long-range inputs, if the RNN is able to learn to carry that information. Compared to a convolutional network, RNNs applied to images are typically more expensive but allow for long-range lateral interactions between features in the same feature map (Visin et al., 2015; Kalchbrenner et al., 2015). Indeed, the forward propagation equations for such RNNs may be written in a form that shows they use a convolution that computes the bottom-up input to each layer, prior to the recurrent propagation across the feature map that incorporates the lateral interactions.
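As an informal sketch (not the book's), the bidirectional combination can be written by running one tanh RNN forward, one backward, and pairing their states per time step; the parameter grouping and the omission of the output layer are simplifications of this sketch.

```python
def bidirectional_states(x_seq, fwd_params, bwd_params, h0, g0):
    """Pair the forward states h^(t) with the backward states g^(t) for each step."""
    def run(seq, params, s0):
        W, U, b = params
        s, states = s0, []
        for x in seq:
            s = np.tanh(b + W @ s + U @ x)
            states.append(s)
        return states

    h_states = run(x_seq, fwd_params, h0)                   # summary of the past
    g_states = run(x_seq[::-1], bwd_params, g0)[::-1]       # summary of the future
    # The output layer (omitted) would read the concatenation [h^(t); g^(t)].
    return [np.concatenate([h, g]) for h, g in zip(h_states, g_states)]
```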
10.4 Encoder-Decoder Sequence-to-Sequence Architectures

We have seen in figure 10.5 how an RNN can map an input sequence to a fixed-size vector. We have seen in figure 10.9 how an RNN can map a fixed-size vector to a sequence. We have seen in figures 10.3, 10.4, 10.10 and 10.11 how an RNN can map an input sequence to an output sequence of the same length.

Figure 10.12: Example of an encoder-decoder or sequence-to-sequence RNN architecture, for learning to generate an output sequence (y^{(1)}, ..., y^{(n_y)}) given an input sequence (x^{(1)}, x^{(2)}, ..., x^{(n_x)}). It is composed of an encoder RNN that reads the input sequence and a decoder RNN that generates the output sequence (or computes the probability of a given output sequence). The final hidden state of the encoder RNN is used to compute a generally fixed-size context variable C, which represents a semantic summary of the input sequence and is given as input to the decoder RNN.

Here we discuss how an RNN can be trained to map an input sequence to an output sequence which is not necessarily of the same length. This comes up in many applications, such as speech recognition, machine translation or question
answering, where the input and output sequences in the training set are generally not of the same length (although their lengths might be related).

We often call the input to the RNN the "context." We want to produce a representation of this context, C. The context C might be a vector or sequence of vectors that summarize the input sequence X = (x^{(1)}, ..., x^{(n_x)}).

The simplest RNN architecture for mapping a variable-length sequence to another variable-length sequence was first proposed by Cho et al. (2014a) and shortly after by Sutskever et al. (2014), who independently developed that architecture and were the first to obtain state-of-the-art translation using this approach. The former system is based on scoring proposals generated by another machine translation system, while the latter uses a standalone recurrent network to generate the translations. These authors respectively called this architecture, illustrated in figure 10.12, the encoder-decoder or sequence-to-sequence architecture. The idea is very simple: (1) an encoder or reader or input RNN processes the input sequence. The encoder emits the context C, usually as a simple function of its final hidden state. (2) a decoder or writer or output RNN is conditioned on that fixed-length vector (just like in figure 10.9) to generate the output sequence Y = (y^{(1)}, ..., y^{(n_y)}). The innovation of this kind of architecture over those presented in earlier sections of this chapter is that the lengths n_x and n_y can vary from each other, while previous architectures constrained n_x = n_y = τ. In a sequence-to-sequence architecture, the two RNNs are trained jointly to maximize the average of log P(y^{(1)}, ..., y^{(n_y)} | x^{(1)}, ..., x^{(n_x)}) over all the pairs of x and y sequences in the training set. The last state h^{(n_x)} of the encoder RNN is typically used as a representation C of the input sequence that is provided as input to the decoder RNN.

If the context C is a vector, then the decoder RNN is simply a vector-to-sequence RNN as described in section 10.2.4. As we have seen, there are at least two ways for a vector-to-sequence RNN to receive input. The input can be provided as the initial state of the RNN, or the input can be connected to the hidden units at each time step. These two ways can also be combined. There is no constraint that the encoder must have the same size of hidden layer as the decoder.
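A bare-bones sketch of the two components might look as follows (my illustration, not the book's); for simplicity it passes C as the decoder's initial state, assumes equal encoder and decoder state sizes, and omits the output feedback and the loss.

```python
def encode(x_seq, enc_params, h0):
    """Encoder: read the whole input sequence and return the context C."""
    W, U, b = enc_params
    h = h0
    for x in x_seq:
        h = np.tanh(b + W @ h + U @ x)
    return h                                   # context C = final hidden state

def decode(C, n_y, dec_params):
    """Decoder: generate n_y outputs conditioned on C through its initial state."""
    W, V, b, c = dec_params
    h, outputs = C, []
    for _ in range(n_y):
        h = np.tanh(b + W @ h)                 # feedback of y^(t-1) omitted here
        outputs.append(c + V @ h)
    return outputs
```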
One clear limitation of this architecture is when the context C output by the encoder RNN has a dimension that is too small to properly summarize a long sequence. This phenomenon was observed by Bahdanau et al. (2015) in the context of machine translation. They proposed to make C a variable-length sequence rather than a fixed-size vector. Additionally, they introduced an attention mechanism that learns to associate elements of the sequence C to elements of the output sequence. See section 12.4.5.1 for more details.

10.5 Deep Recurrent Networks

The computation in most RNNs can be decomposed into three blocks of parameters and associated transformations:

1. from the input to the hidden state,
2. from the previous hidden state to the next hidden state, and
3. from the hidden state to the output.

With the RNN architecture of figure 10.3, each of these three blocks is associated with a single weight matrix. In other words, when the network is unfolded, each of these corresponds to a shallow transformation. By a shallow transformation, we mean a transformation that would be represented by a single layer within a deep MLP. Typically this is a transformation represented by a learned affine transformation followed by a fixed nonlinearity.

Would it be advantageous to introduce depth in each of these operations? Experimental evidence (Graves et al., 2013; Pascanu et al., 2014a) strongly suggests so. The experimental evidence is in agreement with the idea that we need enough depth in order to perform the required mappings. See also Schmidhuber (1992), El Hihi and Bengio (1996), or Jaeger (2007a) for earlier work on deep RNNs.

Graves et al. (2013) were the first to show a significant benefit of decomposing the state of an RNN into multiple layers as in figure 10.13 (left). We can think of the lower layers in the hierarchy depicted in figure 10.13a as playing a role in transforming the raw input into a representation that is more appropriate, at the higher levels of the hidden state. Pascanu et al. (2014a) go a step further and propose to have a separate MLP (possibly deep) for each of the three blocks enumerated above, as illustrated in figure 10.13b. Considerations of representational capacity suggest allocating enough capacity in each of these three steps, but doing so by adding depth may hurt learning by making optimization difficult. In general, it is easier to optimize shallower architectures, and adding the extra depth of figure 10.13b makes the shortest path from a variable in time step t to a variable in time step t + 1 become longer. For example, if an MLP with a single hidden layer is used for the state-to-state transition, we have doubled the length of the shortest path between variables in any two different time steps, compared with the ordinary RNN of figure 10.3. However, as argued by Pascanu et al. (2014a), this
Figure 10.13: A recurrent neural network can be made deep in many ways (Pascanu et al., 2014a). (a) The hidden recurrent state can be broken down into groups organized hierarchically. (b) Deeper computation (e.g., an MLP) can be introduced in the input-to-hidden, hidden-to-hidden and hidden-to-output parts. This may lengthen the shortest path linking different time steps. (c) The path-lengthening effect can be mitigated by introducing skip connections.
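As a small concrete illustration of the hierarchical decomposition in figure 10.13(a), the following NumPy sketch stacks several recurrent layers, each layer's recurrence taking the layer below as its input. The sizes, depth, and initializations are illustrative assumptions, not a prescription from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_layers = 4, 6, 3

# One (W, U, b) block per layer: W is the within-layer recurrence,
# U maps the layer below (or the input, for layer 0) upward.
params = [(rng.normal(scale=0.1, size=(n_hid, n_hid)),
           rng.normal(scale=0.1, size=(n_hid, n_in if l == 0 else n_hid)),
           np.zeros(n_hid))
          for l in range(n_layers)]

def deep_rnn_step(hs, x):
    # hs: per-layer hidden states at time t-1; returns the states at time t.
    new_hs, below = [], x
    for (W, U, b), h in zip(params, hs):
        h = np.tanh(W @ h + U @ below + b)
        new_hs.append(h)
        below = h            # the layer above sees this layer's new state
    return new_hs

hs = [np.zeros(n_hid) for _ in range(n_layers)]
for x in rng.normal(size=(5, n_in)):     # a length-5 input sequence
    hs = deep_rnn_step(hs, x)
print([h.shape for h in hs])
```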
10.6 Recursive Neural Networks

Figure 10.14: A recursive network has a computational graph that generalizes that of the recurrent network from a chain to a tree. A variable-size sequence x^{(1)}, x^{(2)}, ..., x^{(t)} can be mapped to a fixed-size representation (the output o), with a fixed set of parameters (the weight matrices U, V, W). The figure illustrates a supervised learning case in which some target y is provided which is associated with the whole sequence.

Recursive neural networks2 represent yet another generalization of recurrent networks, with a different kind of computational graph, which is structured as a deep tree, rather than the chain-like structure of RNNs. The typical computational graph for a recursive network is illustrated in figure 10.14.

2 We suggest not abbreviating "recursive neural network" as "RNN," to avoid confusion with "recurrent neural network."
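The tree-structured composition of figure 10.14 can be sketched in a few lines. The sketch below assumes a roughly balanced binary tree over the inputs (one of the structuring options discussed below) and reuses the same parameters at every internal node; the names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6  # representation size

V = rng.normal(scale=0.1, size=(d, d))   # maps each raw input x to a representation
U = rng.normal(scale=0.1, size=(d, d))   # left-child weights
W = rng.normal(scale=0.1, size=(d, d))   # right-child weights
b = np.zeros(d)

def compose(left, right):
    # The same parameters are reused at every internal node of the tree.
    return np.tanh(U @ left + W @ right + b)

def recursive_encode(xs):
    # Reduce the sequence pairwise (a roughly balanced tree),
    # giving depth O(log tau) instead of tau.
    nodes = [np.tanh(V @ x) for x in xs]
    while len(nodes) > 1:
        nodes = [compose(nodes[i], nodes[i + 1]) if i + 1 < len(nodes) else nodes[i]
                 for i in range(0, len(nodes), 2)]
    return nodes[0]   # fixed-size representation (before any output layer o)

xs = rng.normal(size=(7, d))
print(recursive_encode(xs).shape)
```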
Recursive neural networks were introduced by Pollack (1990), and their potential use for learning to reason was described by Bottou (2011). Recursive networks have been successfully applied to processing data structures as input to neural nets (Frasconi et al., 1997, 1998), in natural language processing (Socher et al., 2011a,c, 2013a), as well as in computer vision (Socher et al., 2011b).

One clear advantage of recursive nets over recurrent nets is that for a sequence of the same length \tau, the depth (measured as the number of compositions of nonlinear operations) can be drastically reduced from \tau to O(\log \tau), which might help deal with long-term dependencies. An open question is how to best structure the tree. One option is to have a tree structure that does not depend on the data, such as a balanced binary tree. In some application domains, external methods can suggest the appropriate tree structure. For example, when processing natural language sentences, the tree structure for the recursive network can be fixed to the structure of the parse tree of the sentence provided by a natural language parser (Socher et al., 2011a, 2013a). Ideally, one would like the learner itself to discover and infer the tree structure that is appropriate for any given input, as suggested by Bottou (2011).

Many variants of the recursive net idea are possible. For example, Frasconi et al. (1997) and Frasconi et al. (1998) associate the data with a tree structure, and associate the inputs and targets with individual nodes of the tree. The computation performed by each node does not have to be the traditional artificial neuron computation (affine transformation of all inputs followed by a monotone nonlinearity). For example, Socher et al. (2013a) propose using tensor operations and bilinear forms, which have previously been found useful to model relationships between concepts (Weston et al., 2010; Bordes et al., 2012) when the concepts are represented by continuous vectors (embeddings).

10.7 The Challenge of Long-Term Dependencies

The mathematical challenge of learning long-term dependencies in recurrent networks was introduced in section 8.2.5. The basic problem is that gradients propagated over many stages tend to either vanish (most of the time) or explode (rarely, but with much damage to the optimization). Even if we assume that the parameters are such that the recurrent network is stable (can store memories, with gradients not exploding), the difficulty with long-term dependencies arises from the exponentially smaller weights given to long-term interactions (involving the multiplication of many Jacobians) compared to short-term ones. Many other sources provide a deeper treatment (Hochreiter, 1991; Doya, 1993; Bengio et al., 1994; Pascanu et al., 2013).
Figure 10.15: When composing many nonlinear functions (like the linear-tanh layer shown here), the result is highly nonlinear, typically with most of the values associated with a tiny derivative, some values with a large derivative, and many alternations between increasing and decreasing. In this plot, we plot a linear projection of a 100-dimensional hidden state down to a single dimension, plotted on the y-axis. The x-axis is the coordinate of the initial state along a random direction in the 100-dimensional space. We can thus view this plot as a linear cross-section of a high-dimensional function. The plots show the function after each time step, or equivalently, after each number of times the transition function has been composed.

In this section, we describe the problem in more detail. The remaining sections describe approaches to overcoming the problem.

Recurrent networks involve the composition of the same function multiple times, once per time step. These compositions can result in extremely nonlinear behavior, as illustrated in figure 10.15.

In particular, the function composition employed by recurrent neural networks somewhat resembles matrix multiplication. We can think of the recurrence relation

h^{(t)} = W^\top h^{(t-1)}     (10.36)

as a very simple recurrent neural network lacking a nonlinear activation function, and lacking inputs x. As described in section 8.2.5, this recurrence relation essentially describes the power method. It may be simplified to

h^{(t)} = (W^t)^\top h^{(0)},     (10.37)

and if W admits an eigendecomposition of the form

W = Q \Lambda Q^\top,     (10.38)

with orthogonal Q, the recurrence may be simplified further to

h^{(t)} = Q^\top \Lambda^t Q h^{(0)}.     (10.39)

The eigenvalues are raised to the power of t, causing eigenvalues with magnitude less than one to decay to zero and eigenvalues with magnitude greater than one to explode. Any component of h^{(0)} that is not aligned with the largest eigenvector will eventually be discarded.
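The behavior described by equation 10.39 is easy to check numerically. The sketch below builds a small symmetric W from a chosen set of eigenvalues (so that Q is orthogonal) and shows the component of h^{(t)} along the eigenvalue larger than one growing while the others shrink; the particular eigenvalues are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Build W with a known eigendecomposition W = Q diag(lams) Q^T.
lams = np.array([1.1, 0.9, 0.5])          # one eigenvalue > 1, two < 1
A = rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)                     # random orthogonal basis
W = Q @ np.diag(lams) @ Q.T                # symmetric, so W^T = W

h0 = rng.normal(size=3)                    # h(0)
for t in [1, 10, 50]:
    h_t = np.linalg.matrix_power(W, t) @ h0    # h(t) = W^t h(0)
    coords = Q.T @ h_t                          # coordinates in the eigenbasis
    print(t, np.round(coords, 4))
# The coordinate along the eigenvalue-1.1 direction grows like 1.1^t,
# while the others shrink like 0.9^t and 0.5^t.
```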
This problem is particular to recurrent networks. In the scalar case, imagine multiplying a weight w by itself many times. The product w^t will either vanish or explode depending on the magnitude of w. However, if we make a non-recurrent network that has a different weight w^{(t)} at each time step, the situation is different. If the initial state is given by 1, then the state at time t is given by \prod_t w^{(t)}. Suppose that the w^{(t)} values are generated randomly, independently from one another, with zero mean and variance v. The variance of the product is O(v^n). To obtain some desired variance v^* we may choose the individual weights with variance v = \sqrt[n]{v^*}. Very deep feedforward networks with carefully chosen scaling can thus avoid the vanishing and exploding gradient problem, as argued by Sussillo (2014).

The vanishing and exploding gradient problem for RNNs was independently discovered by separate researchers (Hochreiter, 1991; Bengio et al., 1993, 1994). One may hope that the problem can be avoided simply by staying in a region of parameter space where the gradients do not vanish or explode. Unfortunately, in order to store memories in a way that is robust to small perturbations, the RNN must enter a region of parameter space where gradients vanish (Bengio et al., 1993, 1994). Specifically, whenever the model is able to represent long-term dependencies, the gradient of a long-term interaction has exponentially smaller magnitude than the gradient of a short-term interaction. It does not mean that it is impossible to learn, but that it might take a very long time to learn long-term dependencies, because the signal about these dependencies will tend to be hidden by the smallest fluctuations arising from short-term dependencies. In practice, the experiments in Bengio et al. (1994) show that as we increase the span of the dependencies that need to be captured, gradient-based optimization becomes increasingly difficult, with the probability of successful training of a traditional RNN via SGD rapidly reaching 0 for sequences of only length 10 or 20.

For a deeper treatment of recurrent networks as dynamical systems, see Doya (1993), Bengio et al. (1994) and Siegelmann and Sontag (1995), with a review in Pascanu et al. (2013). The remaining sections of this chapter discuss various approaches that have been proposed to reduce the difficulty of learning long-term dependencies (in some cases allowing an RNN to learn dependencies across
  • 420. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS hundreds of steps), but the problem of learning long-term dependencies remains one of the main challenges in deep learning. 10.8 Echo State Networks The recurrent weights mapping from h( 1) t− to h( ) t and the input weights mapping from x( ) t to h( ) t are some of the most difficult parameters to learn in a recurrent network. One proposed ( , ; , ; , ; Jaeger 2003 Maass et al. 2002 Jaeger and Haas 2004 Jaeger 2007b , ) approach to avoiding this difficulty is to set the recurrent weights such that the recurrent hidden units do a good job of capturing the history of past inputs, and learn only the output weights. This is the idea that was independently proposed for echo state networks or ESNs ( , ; , ) Jaeger and Haas 2004 Jaeger 2007b and liquid state machines ( , ). The latter is similar, except Maass et al. 2002 that it uses spiking neurons (with binary outputs) instead of the continuous-valued hidden units used for ESNs. Both ESNs and liquid state machines are termed reservoir computing (Lukoševičius and Jaeger 2009 , ) to denote the fact that the hidden units form of reservoir of temporal features which may capture different aspects of the history of inputs. One way to think about these reservoir computing recurrent networks is that they are similar to kernel machines: they map an arbitrary length sequence (the history of inputs up to time t) into a fixed-length vector (the recurrent state h( ) t ), on which a linear predictor (typically a linear regression) can be applied to solve the problem of interest. The training criterion may then be easily designed to be convex as a function of the output weights. For example, if the output consists of linear regression from the hidden units to the output targets, and the training criterion is mean squared error, then it is convex and may be solved reliably with simple learning algorithms ( , ). Jaeger 2003 The important question is therefore: how do we set the input and recurrent weights so that a rich set of histories can be represented in the recurrent neural network state? The answer proposed in the reservoir computing literature is to view the recurrent net as a dynamical system, and set the input and recurrent weights such that the dynamical system is near the edge of stability. The original idea was to make the eigenvalues of the Jacobian of the state-to- state transition function be close to . As explained in section , an important 1 8.2.5 characteristic of a recurrent network is the eigenvalue spectrum of the Jacobians J( ) t = ∂s( ) t ∂s( 1) t− . Of particular importance is the spectral radius of J( ) t , defined to be the maximum of the absolute values of its eigenvalues. 404
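A minimal NumPy sketch of the reservoir-computing recipe just described may help make it concrete: the input and recurrent weights are fixed at random, the recurrent matrix is rescaled to a chosen spectral radius, and only a linear readout is trained, here by ridge-regularized least squares on a toy one-step-ahead prediction task. All sizes and constants are illustrative assumptions rather than recommended settings.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_res, T = 1, 100, 500
target_radius = 0.95              # illustrative choice near the edge of stability

# Fixed (untrained) input and recurrent weights.
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= target_radius / np.abs(np.linalg.eigvals(W)).max()   # rescale spectral radius

# Drive the reservoir with a toy signal; the task: predict x one step ahead.
x = np.sin(0.2 * np.arange(T))[:, None]
H = np.zeros((T, n_res))
h = np.zeros(n_res)
for t in range(T):
    h = np.tanh(W @ h + W_in @ x[t])
    H[t] = h

# Only the output weights are learned; with a mean squared error criterion the
# problem is convex, solved here in closed form with a small ridge penalty.
targets = x[1:]                                    # x(t+1)
ridge = 1e-6
W_out = np.linalg.solve(H[:-1].T @ H[:-1] + ridge * np.eye(n_res),
                        H[:-1].T @ targets)
pred = H[:-1] @ W_out
print("train MSE:", float(np.mean((pred - targets) ** 2)))
```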
  • 421. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS To understand the effect of the spectral radius, consider the simple case of back-propagation with a Jacobian matrix J that does not change with t. This case happens, for example, when the network is purely linear. Suppose that J has an eigenvector v with corresponding eigenvalue λ. Consider what happens as we propagate a gradient vector backwards through time. If we begin with a gradient vector g, then after one step of back-propagation, we will have Jg, and after n steps we will have Jn g. Now consider what happens if we instead back-propagate a perturbed version of g. If we begin with g + δv, then after one step, we will have J(g + δv). After n steps, we will have J n(g + δv ). From this we can see that back-propagation starting from g and back-propagation starting from g + δv diverge by δJ nv after n steps of back-propagation. If v is chosen to be a unit eigenvector of J with eigenvalue λ, then multiplication by the Jacobian simply scales the difference at each step. The two executions of back-propagation are separated by a distance of δ λ | |n . When v corresponds to the largest value of | | λ , this perturbation achieves the widest possible separation of an initial perturbation of size . δ When | | λ > 1, the deviation size δ λ | |n grows exponentially large. When | | λ < 1, the deviation size becomes exponentially small. Of course, this example assumed that the Jacobian was the same at every time step, corresponding to a recurrent network with no nonlinearity. When a nonlinearity is present, the derivative of the nonlinearity will approach zero on many time steps, and help to prevent the explosion resulting from a large spectral radius. Indeed, the most recent work on echo state networks advocates using a spectral radius much larger than unity ( , ; , ). Yildiz et al. 2012 Jaeger 2012 Everything we have said about back-propagation via repeated matrix multipli- cation applies equally to forward propagation in a network with no nonlinearity, where the state h( +1) t = h( ) t  W . When a linear map W always shrinks h as measured by the L2 norm, then we say that the map is contractive. When the spectral radius is less than one, the mapping from h( ) t to h( +1) t is contractive, so a small change becomes smaller after each time step. This necessarily makes the network forget information about the past when we use a finite level of precision (such as 32 bit integers) to store the state vector. The Jacobian matrix tells us how a small change of h( ) t propagates one step forward, or equivalently, how the gradient on h( +1) t propagates one step backward, during back-propagation. Note that neither W nor J need to be symmetric (al- though they are square and real), so they can have complex-valued eigenvalues and eigenvectors, with imaginary components corresponding to potentially oscillatory 405
  • 422. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS behavior (if the same Jacobian was applied iteratively). Even though h( ) t or a small variation of h( ) t of interest in back-propagation are real-valued, they can be expressed in such a complex-valued basis. What matters is what happens to the magnitude (complex absolute value) of these possibly complex-valued basis coefficients, when we multiply the matrix by the vector. An eigenvalue with magnitude greater than one corresponds to magnification (exponential growth, if applied iteratively) or shrinking (exponential decay, if applied iteratively). With a nonlinear map, the Jacobian is free to change at each step. The dynamics therefore become more complicated. However, it remains true that a small initial variation can turn into a large variation after several steps. One difference between the purely linear case and the nonlinear case is that the use of a squashing nonlinearity such as tanh can cause the recurrent dynamics to become bounded. Note that it is possible for back-propagation to retain unbounded dynamics even when forward propagation has bounded dynamics, for example, when a sequence of tanh units are all in the middle of their linear regime and are connected by weight matrices with spectral radius greater than . However, it is 1 rare for all of the units to simultaneously lie at their linear activation point. tanh The strategy of echo state networks is simply to fix the weights to have some spectral radius such as , where information is carried forward through time but 3 does not explode due to the stabilizing effect of saturating nonlinearities like tanh. More recently, it has been shown that the techniques used to set the weights in ESNs could be used to the weights in a fully trainable recurrent net- initialize work (with the hidden-to-hidden recurrent weights trained using back-propagation through time), helping to learn long-term dependencies (Sutskever 2012 Sutskever , ; et al., ). In this setting, an initial spectral radius of 1.2 performs well, combined 2013 with the sparse initialization scheme described in section . 8.4 10.9 Leaky Units and Other Strategies for Multiple Time Scales One way to deal with long-term dependencies is to design a model that operates at multiple time scales, so that some parts of the model operate at fine-grained time scales and can handle small details, while other parts operate at coarse time scales and transfer information from the distant past to the present more efficiently. Various strategies for building both fine and coarse time scales are possible. These include the addition of skip connections across time, “leaky units” that integrate signals with different time constants, and the removal of some of the connections 406
  • 423. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS used to model fine-grained time scales. 10.9.1 Adding Skip Connections through Time One way to obtain coarse time scales is to add direct connections from variables in the distant past to variables in the present. The idea of using such skip connections dates back to ( ) and follows from the idea of incorporating delays in Lin et al. 1996 feedforward neural networks ( , ). In an ordinary recurrent Lang and Hinton 1988 network, a recurrent connection goes from a unit at time t to a unit at time t+ 1. It is possible to construct recurrent networks with longer delays ( , ). Bengio 1991 As we have seen in section , gradients may vanish or explode exponentially 8.2.5 with respect to the number of time steps. ( ) introduced recurrent Lin et al. 1996 connections with a time-delay of d to mitigate this problem. Gradients now diminish exponentially as a function of τ d rather than τ. Since there are both delayed and single step connections, gradients may still explode exponentially in τ. This allows the learning algorithm to capture longer dependencies although not all long-term dependencies may be represented well in this way. 10.9.2 Leaky Units and a Spectrum of Different Time Scales Another way to obtain paths on which the product of derivatives is close to one is to have units with linear self-connections and a weight near one on these connections. When we accumulate a running average µ( ) t of some value v( ) t by applying the update µ( ) t ← αµ( 1) t− + (1 − α)v( ) t the α parameter is an example of a linear self- connection from µ( 1) t− to µ( ) t . When α is near one, the running average remembers information about the past for a long time, and when α is near zero, information about the past is rapidly discarded. Hidden units with linear self-connections can behave similarly to such running averages. Such hidden units are called leaky units. Skip connections through d time steps are a way of ensuring that a unit can always learn to be influenced by a value from d time steps earlier. The use of a linear self-connection with a weight near one is a different way of ensuring that the unit can access values from the past. The linear self-connection approach allows this effect to be adapted more smoothly and flexibly by adjusting the real-valued α rather than by adjusting the integer-valued skip length. These ideas were proposed by ( ) and by ( ). Mozer 1992 El Hihi and Bengio 1996 Leaky units were also found to be useful in the context of echo state networks ( , ). Jaeger et al. 2007 407
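The running-average view of a leaky unit takes only a few lines of code. The sketch below is purely illustrative: a value of alpha near one gives a long memory of the signal, while a value near zero tracks only recent values.

```python
import numpy as np

rng = np.random.default_rng(5)
v = rng.normal(size=200)              # some signal v(t)

def leaky(v, alpha):
    mu, out = 0.0, []
    for v_t in v:
        mu = alpha * mu + (1.0 - alpha) * v_t   # linear self-connection of weight alpha
        out.append(mu)
    return np.array(out)

slow = leaky(v, alpha=0.99)   # long time constant: remembers the distant past
fast = leaky(v, alpha=0.10)   # short time constant: follows recent values
print(slow.std(), fast.std())
```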
  • 424. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS There are two basic strategies for setting the time constants used by leaky units. One strategy is to manually fix them to values that remain constant, for example by sampling their values from some distribution once at initialization time. Another strategy is to make the time constants free parameters and learn them. Having such leaky units at different time scales appears to help with long-term dependencies ( , ; Mozer 1992 Pascanu 2013 et al., ). 10.9.3 Removing Connections Another approach to handle long-term dependencies is the idea of organizing the state of the RNN at multiple time-scales ( , ), with El Hihi and Bengio 1996 information flowing more easily through long distances at the slower time scales. This idea differs from the skip connections through time discussed earlier because it involves actively removing length-one connections and replacing them with longer connections. Units modified in such a way are forced to operate on a long time scale. Skip connections through time edges. Units receiving such add new connections may learn to operate on a long time scale but may also choose to focus on their other short-term connections. There are different ways in which a group of recurrent units can be forced to operate at different time scales. One option is to make the recurrent units leaky, but to have different groups of units associated with different fixed time scales. This was the proposal in ( ) and has been successfully used in Mozer 1992 Pascanu et al. ( ). Another option is to have explicit and discrete updates taking place 2013 at different times, with a different frequency for different groups of units. This is the approach of ( ) and El Hihi and Bengio 1996 Koutnik 2014 et al. ( ). It worked well on a number of benchmark datasets. 10.10 The Long Short-Term Memory and Other Gated RNNs As of this writing, the most effective sequence models used in practical applications are called gated RNNs. These include the long short-term memory and networks based on the . gated recurrent unit Like leaky units, gated RNNs are based on the idea of creating paths through time that have derivatives that neither vanish nor explode. Leaky units did this with connection weights that were either manually chosen constants or were parameters. Gated RNNs generalize this to connection weights that may change 408
at each time step.

Figure 10.16: Block diagram of the LSTM recurrent network "cell." Cells are connected recurrently to each other, replacing the usual hidden units of ordinary recurrent networks. An input feature is computed with a regular artificial neuron unit. Its value can be accumulated into the state if the sigmoidal input gate allows it. The state unit has a linear self-loop whose weight is controlled by the forget gate. The output of the cell can be shut off by the output gate. All the gating units have a sigmoid nonlinearity, while the input unit can have any squashing nonlinearity. The state unit can also be used as an extra input to the gating units. The black square indicates a delay of a single time step.

Leaky units allow the network to accumulate information (such as evidence for a particular feature or category) over a long duration. However, once that information has been used, it might be useful for the neural network to forget the old state. For example, if a sequence is made of sub-sequences and we want a leaky unit to accumulate evidence inside each sub-sequence, we need a mechanism to forget the old state by setting it to zero. Instead of manually deciding when to clear the state, we want the neural network to learn to decide when to do it. This is what gated RNNs do.
10.10.1 LSTM

The clever idea of introducing self-loops to produce paths where the gradient can flow for long durations is a core contribution of the initial long short-term memory (LSTM) model (Hochreiter and Schmidhuber, 1997). A crucial addition has been to make the weight on this self-loop conditioned on the context, rather than fixed (Gers et al., 2000). By making the weight of this self-loop gated (controlled by another hidden unit), the time scale of integration can be changed dynamically. In this case, we mean that even for an LSTM with fixed parameters, the time scale of integration can change based on the input sequence, because the time constants are output by the model itself. The LSTM has been found extremely successful in many applications, such as unconstrained handwriting recognition (Graves et al., 2009), speech recognition (Graves et al., 2013; Graves and Jaitly, 2014), handwriting generation (Graves, 2013), machine translation (Sutskever et al., 2014), image captioning (Kiros et al., 2014b; Vinyals et al., 2014b; Xu et al., 2015) and parsing (Vinyals et al., 2014a).

The LSTM block diagram is illustrated in figure 10.16. The corresponding forward propagation equations are given below, in the case of a shallow recurrent network architecture. Deeper architectures have also been successfully used (Graves et al., 2013; Pascanu et al., 2014a). Instead of a unit that simply applies an element-wise nonlinearity to the affine transformation of inputs and recurrent units, LSTM recurrent networks have "LSTM cells" that have an internal recurrence (a self-loop), in addition to the outer recurrence of the RNN. Each cell has the same inputs and outputs as an ordinary recurrent network, but has more parameters and a system of gating units that controls the flow of information. The most important component is the state unit s_i^{(t)}, which has a linear self-loop similar to the leaky units described in the previous section. However, here, the self-loop weight (or the associated time constant) is controlled by a forget gate unit f_i^{(t)} (for time step t and cell i), which sets this weight to a value between 0 and 1 via a sigmoid unit:

f_i^{(t)} = \sigma\left( b_i^f + \sum_j U_{i,j}^f x_j^{(t)} + \sum_j W_{i,j}^f h_j^{(t-1)} \right),     (10.40)

where x^{(t)} is the current input vector and h^{(t)} is the current hidden layer vector, containing the outputs of all the LSTM cells, and b^f, U^f, W^f are respectively biases, input weights and recurrent weights for the forget gates.
The LSTM cell internal state is thus updated as follows, but with a conditional self-loop weight f_i^{(t)}:

s_i^{(t)} = f_i^{(t)} s_i^{(t-1)} + g_i^{(t)} \sigma\left( b_i + \sum_j U_{i,j} x_j^{(t)} + \sum_j W_{i,j} h_j^{(t-1)} \right),     (10.41)

where b, U and W respectively denote the biases, input weights and recurrent weights into the LSTM cell. The external input gate unit g_i^{(t)} is computed similarly to the forget gate (with a sigmoid unit to obtain a gating value between 0 and 1), but with its own parameters:

g_i^{(t)} = \sigma\left( b_i^g + \sum_j U_{i,j}^g x_j^{(t)} + \sum_j W_{i,j}^g h_j^{(t-1)} \right).     (10.42)

The output h_i^{(t)} of the LSTM cell can also be shut off, via the output gate q_i^{(t)}, which also uses a sigmoid unit for gating:

h_i^{(t)} = \tanh\left( s_i^{(t)} \right) q_i^{(t)},     (10.43)

q_i^{(t)} = \sigma\left( b_i^o + \sum_j U_{i,j}^o x_j^{(t)} + \sum_j W_{i,j}^o h_j^{(t-1)} \right),     (10.44)

which has parameters b^o, U^o, W^o for its biases, input weights and recurrent weights, respectively. Among the variants, one can choose to use the cell state s_i^{(t)} as an extra input (with its weight) into the three gates of the i-th unit, as shown in figure 10.16. This would require three additional parameters.

LSTM networks have been shown to learn long-term dependencies more easily than the simple recurrent architectures, first on artificial data sets designed for testing the ability to learn long-term dependencies (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997; Hochreiter et al., 2001), then on challenging sequence processing tasks where state-of-the-art performance was obtained (Graves, 2012; Graves et al., 2013; Sutskever et al., 2014). Variants and alternatives to the LSTM have been studied and used and are discussed next.
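Putting equations 10.40 through 10.44 together, one step of an LSTM layer can be sketched directly in NumPy. This is a minimal illustration (vectorized over the cells of a single layer; the sigmoid input nonlinearity of equation 10.41 is used), with arbitrary sizes and random parameters.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(p, x, h_prev, s_prev):
    f = sigmoid(p["b_f"] + p["U_f"] @ x + p["W_f"] @ h_prev)   # forget gate, eq. 10.40
    g = sigmoid(p["b_g"] + p["U_g"] @ x + p["W_g"] @ h_prev)   # external input gate, eq. 10.42
    s = f * s_prev + g * sigmoid(p["b"] + p["U"] @ x + p["W"] @ h_prev)  # state, eq. 10.41
    q = sigmoid(p["b_o"] + p["U_o"] @ x + p["W_o"] @ h_prev)   # output gate, eq. 10.44
    h = np.tanh(s) * q                                          # cell output, eq. 10.43
    return h, s

rng = np.random.default_rng(6)
n_in, n_cells = 4, 8
p = {}
for name in ["f", "g", "o", ""]:
    suf = "_" + name if name else ""
    p["b" + suf] = np.zeros(n_cells)
    p["U" + suf] = rng.normal(scale=0.1, size=(n_cells, n_in))
    p["W" + suf] = rng.normal(scale=0.1, size=(n_cells, n_cells))

h = np.zeros(n_cells)
s = np.zeros(n_cells)
for x in rng.normal(size=(10, n_in)):    # a length-10 input sequence
    h, s = lstm_step(p, x, h, s)
print(h.shape, s.shape)
```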
10.10.2 Other Gated RNNs

Which pieces of the LSTM architecture are actually necessary? What other successful architectures could be designed that allow the network to dynamically control the time scale and forgetting behavior of different units?

Some answers to these questions are given with the recent work on gated RNNs, whose units are also known as gated recurrent units or GRUs (Cho et al., 2014b; Chung et al., 2014, 2015a; Jozefowicz et al., 2015; Chrupala et al., 2015). The main difference with the LSTM is that a single gating unit simultaneously controls the forgetting factor and the decision to update the state unit. The update equations are the following:

h_i^{(t)} = u_i^{(t-1)} h_i^{(t-1)} + (1 - u_i^{(t-1)}) \sigma\left( b_i + \sum_j U_{i,j} x_j^{(t-1)} + \sum_j W_{i,j} r_j^{(t-1)} h_j^{(t-1)} \right),     (10.45)

where u stands for the "update" gate and r for the "reset" gate. Their value is defined as usual:

u_i^{(t)} = \sigma\left( b_i^u + \sum_j U_{i,j}^u x_j^{(t)} + \sum_j W_{i,j}^u h_j^{(t)} \right)     (10.46)

and

r_i^{(t)} = \sigma\left( b_i^r + \sum_j U_{i,j}^r x_j^{(t)} + \sum_j W_{i,j}^r h_j^{(t)} \right).     (10.47)

The reset and update gates can individually "ignore" parts of the state vector. The update gates act like conditional leaky integrators that can linearly gate any dimension, thus choosing to copy it (at one extreme of the sigmoid) or completely ignore it (at the other extreme) by replacing it by the new "target state" value (towards which the leaky integrator wants to converge). The reset gates control which parts of the state get used to compute the next target state, introducing an additional nonlinear effect in the relationship between past state and future state.

Many more variants around this theme can be designed. For example, the reset gate (or forget gate) output could be shared across multiple hidden units. Alternately, the product of a global gate (covering a whole group of units, such as an entire layer) and a local gate (per unit) could be used to combine global control and local control. However, several investigations over architectural variations of the LSTM and GRU found no variant that would clearly beat both of these across a wide range of tasks (Greff et al., 2015; Jozefowicz et al., 2015). Greff et al. (2015) found that a crucial ingredient is the forget gate, while Jozefowicz et al. (2015) found that adding a bias of 1 to the LSTM forget gate, a practice advocated by Gers et al. (2000), makes the LSTM as strong as the best of the explored architectural variants.
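For comparison with the LSTM sketch above, one step of a GRU layer can be written in the same style. This illustration uses the conventional time indexing in which the gates are computed from the current input and previous state (a slight simplification of the indexing in equation 10.45) and keeps the sigmoid candidate nonlinearity used in that equation; parameters and sizes are again arbitrary.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(p, x, h_prev):
    u = sigmoid(p["b_u"] + p["U_u"] @ x + p["W_u"] @ h_prev)      # update gate
    r = sigmoid(p["b_r"] + p["U_r"] @ x + p["W_r"] @ h_prev)      # reset gate
    cand = sigmoid(p["b"] + p["U"] @ x + p["W"] @ (r * h_prev))   # candidate state
    # A single gate u both copies the old state and writes the new candidate.
    return u * h_prev + (1.0 - u) * cand

rng = np.random.default_rng(7)
n_in, n_units = 4, 8
p = {}
for suf in ["_u", "_r", ""]:
    p["b" + suf] = np.zeros(n_units)
    p["U" + suf] = rng.normal(scale=0.1, size=(n_units, n_in))
    p["W" + suf] = rng.normal(scale=0.1, size=(n_units, n_units))

h = np.zeros(n_units)
for x in rng.normal(size=(10, n_in)):
    h = gru_step(p, x, h)
print(h.shape)
```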
  • 429. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS 10.11 Optimization for Long-Term Dependencies Section and section have described the vanishing and exploding gradient 8.2.5 10.7 problems that occur when optimizing RNNs over many time steps. An interesting idea proposed by Martens and Sutskever 2011 ( ) is that second derivatives may vanish at the same time that first derivatives vanish. Second-order optimization algorithms may roughly be understood as dividing the first derivative by the second derivative (in higher dimension, multiplying the gradient by the inverse Hessian). If the second derivative shrinks at a similar rate to the first derivative, then the ratio of first and second derivatives may remain relatively constant. Unfortunately, second-order methods have many drawbacks, including high computational cost, the need for a large minibatch, and a tendency to be attracted to saddle points. Martens and Sutskever 2011 ( ) found promising results using second-order methods. Later, Sutskever 2013 et al. ( ) found that simpler methods such as Nesterov momentum with careful initialization could achieve similar results. See Sutskever 2012 ( ) for more detail. Both of these approaches have largely been replaced by simply using SGD (even without momentum) applied to LSTMs. This is part of a continuing theme in machine learning that it is often much easier to design a model that is easy to optimize than it is to design a more powerful optimization algorithm. 10.11.1 Clipping Gradients As discussed in section , strongly nonlinear functions such as those computed 8.2.4 by a recurrent net over many time steps tend to have derivatives that can be either very large or very small in magnitude. This is illustrated in figure and 8.3 figure , in which we see that the objective function (as a function of the 10.17 parameters) has a “landscape” in which one finds “cliffs”: wide and rather flat regions separated by tiny regions where the objective function changes quickly, forming a kind of cliff. The difficulty that arises is that when the parameter gradient is very large, a gradient descent parameter update could throw the parameters very far, into a region where the objective function is larger, undoing much of the work that had been done to reach the current solution. The gradient tells us the direction that corresponds to the steepest descent within an infinitesimal region surrounding the current parameters. Outside of this infinitesimal region, the cost function may begin to curve back upwards. The update must be chosen to be small enough to avoid traversing too much upward curvature. We typically use learning rates that 413
decay slowly enough that consecutive steps have approximately the same learning rate. A step size that is appropriate for a relatively linear part of the landscape is often inappropriate and causes uphill motion if we enter a more curved part of the landscape on the next step.

Figure 10.17: Example of the effect of gradient clipping in a recurrent network with two parameters w and b. Gradient clipping can make gradient descent perform more reasonably in the vicinity of extremely steep cliffs. These steep cliffs commonly occur in recurrent networks near where a recurrent network behaves approximately linearly. The cliff is exponentially steep in the number of time steps because the weight matrix is multiplied by itself once for each time step. (Left) Gradient descent without gradient clipping overshoots the bottom of this small ravine, then receives a very large gradient from the cliff face. The large gradient catastrophically propels the parameters outside the axes of the plot. (Right) Gradient descent with gradient clipping has a more moderate reaction to the cliff. While it does ascend the cliff face, the step size is restricted so that it cannot be propelled away from the steep region near the solution. Figure adapted with permission from Pascanu et al. (2013).

A simple type of solution has been in use by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013). One option is to clip the parameter gradient from a minibatch element-wise (Mikolov, 2012) just before the parameter update. Another is to clip the norm ||g|| of the gradient g (Pascanu et al., 2013) just before the parameter update:

\text{if } \|g\| > v     (10.48)

g \leftarrow \frac{g v}{\|g\|},     (10.49)

where v is the norm threshold and g is used to update parameters. Because the gradient of all the parameters (including different groups of parameters, such as weights and biases) is renormalized jointly with a single scaling factor, the latter method has the advantage that it guarantees that each step is still in the gradient direction, but experiments suggest that both forms work similarly.
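Norm clipping (equations 10.48 and 10.49) amounts to a few lines of code in practice. The sketch below jointly renormalizes all parameter groups with a single scaling factor, as described; the gradient values are synthetic.

```python
import numpy as np

def clip_gradient_norm(grads, v):
    # grads: list of gradient arrays, one per parameter group.
    # A single scaling factor is applied to all of them, so the clipped
    # step keeps the direction of the (minibatch) gradient.
    norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if norm > v:
        scale = v / norm
        grads = [g * scale for g in grads]
    return grads, norm

rng = np.random.default_rng(8)
grads = [rng.normal(size=(5, 5)) * 100.0,      # an "exploded" weight gradient
         rng.normal(size=5) * 100.0]           # an "exploded" bias gradient
clipped, norm = clip_gradient_norm(grads, v=1.0)
print(norm, np.sqrt(sum(np.sum(g ** 2) for g in clipped)))   # second value is ~1.0
```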
Although the parameter update has the same direction as the true gradient, with gradient norm clipping, the parameter update vector norm is now bounded. This bounded gradient avoids performing a detrimental step when the gradient explodes. In fact, even simply taking a random step when the gradient magnitude is above a threshold tends to work almost as well. If the explosion is so severe that the gradient is numerically Inf or NaN (considered infinite or not-a-number), then a random step of size v can be taken and will typically move away from the numerically unstable configuration.

Clipping the gradient norm per-minibatch will not change the direction of the gradient for an individual minibatch. However, taking the average of the norm-clipped gradient from many minibatches is not equivalent to clipping the norm of the true gradient (the gradient formed from using all examples). Examples that have large gradient norm, as well as examples that appear in the same minibatch as such examples, will have their contribution to the final direction diminished. This stands in contrast to traditional minibatch gradient descent, where the true gradient direction is equal to the average over all minibatch gradients. Put another way, traditional stochastic gradient descent uses an unbiased estimate of the gradient, while gradient descent with norm clipping introduces a heuristic bias that we know empirically to be useful. With element-wise clipping, the direction of the update is not aligned with the true gradient or the minibatch gradient, but it is still a descent direction. It has also been proposed (Graves, 2013) to clip the back-propagated gradient (with respect to hidden units), but no comparison has been published between these variants; we conjecture that all these methods behave similarly.

10.11.2 Regularizing to Encourage Information Flow

Gradient clipping helps to deal with exploding gradients, but it does not help with vanishing gradients. To address vanishing gradients and better capture long-term dependencies, we discussed the idea of creating paths in the computational graph of the unfolded recurrent architecture along which the product of gradients associated with arcs is near 1. One approach to achieve this is with LSTMs and other self-loops and gating mechanisms, described above in section 10.10. Another idea is to regularize or constrain the parameters so as to encourage "information flow." In particular, we would like the gradient vector \nabla_{h^{(t)}} L being back-propagated to maintain its magnitude, even if the loss function only penalizes the output at the end of the sequence.
Formally, we want

(\nabla_{h^{(t)}} L)^\top \frac{\partial h^{(t)}}{\partial h^{(t-1)}}     (10.50)

to be as large as

\nabla_{h^{(t)}} L.     (10.51)

With this objective, Pascanu et al. (2013) propose the following regularizer:

\Omega = \sum_t \left( \frac{ \left\| (\nabla_{h^{(t)}} L)^\top \frac{\partial h^{(t)}}{\partial h^{(t-1)}} \right\| }{ \left\| \nabla_{h^{(t)}} L \right\| } - 1 \right)^2.     (10.52)

Computing the gradient of this regularizer may appear difficult, but Pascanu et al. (2013) propose an approximation in which we consider the back-propagated vectors \nabla_{h^{(t)}} L as if they were constants (for the purpose of this regularizer, so that there is no need to back-propagate through them). The experiments with this regularizer suggest that, if combined with the norm clipping heuristic (which handles gradient explosion), the regularizer can considerably increase the span of the dependencies that an RNN can learn. Because it keeps the RNN dynamics on the edge of explosive gradients, the gradient clipping is particularly important. Without gradient clipping, gradient explosion prevents learning from succeeding.

A key weakness of this approach is that it is not as effective as the LSTM for tasks where data is abundant, such as language modeling.

10.12 Explicit Memory

Intelligence requires knowledge, and acquiring knowledge can be done via learning, which has motivated the development of large-scale deep architectures. However, there are different kinds of knowledge. Some knowledge can be implicit, sub-conscious, and difficult to verbalize, such as how to walk, or how a dog looks different from a cat. Other knowledge can be explicit, declarative, and relatively straightforward to put into words: everyday commonsense knowledge, like "a cat is a kind of animal," or very specific facts that you need to know to accomplish your current goals, like "the meeting with the sales team is at 3:00 PM in room 141."

Neural networks excel at storing implicit knowledge. However, they struggle to memorize facts. Stochastic gradient descent requires many presentations of the
  • 433. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS Task network, controlling the memory Memory cells Writing mechanism Reading mechanism Figure 10.18: A schematic of an example of a network with an explicit memory, capturing some of the key design elements of the neural Turing machine. In this diagram we distinguish the “representation” part of the model (the “task network,” here a recurrent net in the bottom) from the “memory” part of the model (the set of cells), which can store facts. The task network learns to “control” the memory, deciding where to read from and where to write to within the memory (through the reading and writing mechanisms, indicated by bold arrows pointing at the reading and writing addresses). 417
  • 434. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS same input before it can be stored in a neural network parameters, and even then, that input will not be stored especially precisely. Graves 2014b et al. ( ) hypothesized that this is because neural networks lack the equivalent of the working memory system that allows human beings to explicitly hold and manipulate pieces of information that are relevant to achieving some goal. Such explicit memory components would allow our systems not only to rapidly and “intentionally” store and retrieve specific facts but also to sequentially reason with them. The need for neural networks that can process information in a sequence of steps, changing the way the input is fed into the network at each step, has long been recognized as important for the ability to reason rather than to make automatic, intuitive responses to the input ( , ). Hinton 1990 To resolve this difficulty, Weston 2014 et al. ( ) introduced memory networks that include a set of memory cells that can be accessed via an addressing mecha- nism. Memory networks originally required a supervision signal instructing them how to use their memory cells. Graves 2014b et al. ( ) introduced the neural Turing machine, which is able to learn to read from and write arbitrary content to memory cells without explicit supervision about which actions to undertake, and allowed end-to-end training without this supervision signal, via the use of a content-based soft attention mechanism (see ( ) and sec- Bahdanau et al. 2015 tion ). This soft addressing mechanism has become standard with other 12.4.5.1 related architectures emulating algorithmic mechanisms in a way that still allows gradient-based optimization ( , ; Sukhbaatar et al. 2015 Joulin and Mikolov 2015 , ; Kumar 2015 Vinyals 2015a Grefenstette 2015 et al., ; et al., ; et al., ). Each memory cell can be thought of as an extension of the memory cells in LSTMs and GRUs. The difference is that the network outputs an internal state that chooses which cell to read from or write to, just as memory accesses in a digital computer read from or write to a specific address. It is difficult to optimize functions that produce exact, integer addresses. To alleviate this problem, NTMs actually read to or write from many memory cells simultaneously. To read, they take a weighted average of many cells. To write, they modify multiple cells by different amounts. The coefficients for these operations are chosen to be focused on a small number of cells, for example, by producing them via a softmax function. Using these weights with non-zero derivatives allows the functions controlling access to the memory to be optimized using gradient descent. The gradient on these coefficients indicates whether each of them should be increased or decreased, but the gradient will typically be large only for those memory addresses receiving a large coefficient. These memory cells are typically augmented to contain a vector, rather than 418
  • 435. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS the single scalar stored by an LSTM or GRU memory cell. There are two reasons to increase the size of the memory cell. One reason is that we have increased the cost of accessing a memory cell. We pay the computational cost of producing a coefficient for many cells, but we expect these coefficients to cluster around a small number of cells. By reading a vector value, rather than a scalar value, we can offset some of this cost. Another reason to use vector-valued memory cells is that they allow for content-based addressing, where the weight used to read to or write from a cell is a function of that cell. Vector-valued cells allow us to retrieve a complete vector-valued memory if we are able to produce a pattern that matches some but not all of its elements. This is analogous to the way that people can recall the lyrics of a song based on a few words. We can think of a content-based read instruction as saying, “Retrieve the lyrics of the song that has the chorus ‘We all live in a yellow submarine.’ ” Content-based addressing is more useful when we make the objects to be retrieved large—if every letter of the song was stored in a separate memory cell, we would not be able to find them this way. By comparison, location-based addressing is not allowed to refer to the content of the memory. We can think of a location-based read instruction as saying “Retrieve the lyrics of the song in slot 347.” Location-based addressing can often be a perfectly sensible mechanism even when the memory cells are small. If the content of a memory cell is copied (not forgotten) at most time steps, then the information it contains can be propagated forward in time and the gradients propagated backward in time without either vanishing or exploding. The explicit memory approach is illustrated in figure , where we see that 10.18 a “task neural network” is coupled with a memory. Although that task neural network could be feedforward or recurrent, the overall system is a recurrent network. The task network can choose to read from or write to specific memory addresses. Explicit memory seems to allow models to learn tasks that ordinary RNNs or LSTM RNNs cannot learn. One reason for this advantage may be because information and gradients can be propagated (forward in time or backwards in time, respectively) for very long durations. As an alternative to back-propagation through weighted averages of memory cells, we can interpret the memory addressing coefficients as probabilities and stochastically read just one cell (Zaremba and Sutskever 2015 , ). Optimizing models that make discrete decisions requires specialized optimization algorithms, described in section . So far, training these stochastic architectures that make discrete 20.9.1 decisions remains harder than training deterministic algorithms that make soft decisions. Whether it is soft (allowing back-propagation) or stochastic and hard, the 419
  • 436. CHAPTER 10. SEQUENCE MODELING: RECURRENT AND RECURSIVE NETS mechanism for choosing an address is in its form identical to the attention mechanism which had been previously introduced in the context of machine translation ( , ) and discussed in section . The idea Bahdanau et al. 2015 12.4.5.1 of attention mechanisms for neural networks was introduced even earlier, in the context of handwriting generation (Graves 2013 , ), with an attention mechanism that was constrained to move only forward in time through the sequence. In the case of machine translation and memory networks, at each step, the focus of attention can move to a completely different place, compared to the previous step. Recurrent neural networks provide a way to extend deep learning to sequential data. They are the last major tool in our deep learning toolbox. Our discussion now moves to how to choose and use these tools and how to apply them to real-world tasks. 420
  • 437. Chapter 11 Practical Methodology Successfully applying deep learning techniques requires more than just a good knowledge of what algorithms exist and the principles that explain how they work. A good machine learning practitioner also needs to know how to choose an algorithm for a particular application and how to monitor and respond to feedback obtained from experiments in order to improve a machine learning system. During day to day development of machine learning systems, practitioners need to decide whether to gather more data, increase or decrease model capacity, add or remove regularizing features, improve the optimization of a model, improve approximate inference in a model, or debug the software implementation of the model. All of these operations are at the very least time-consuming to try out, so it is important to be able to determine the right course of action rather than blindly guessing. Most of this book is about different machine learning models, training algo- rithms, and objective functions. This may give the impression that the most important ingredient to being a machine learning expert is knowing a wide variety of machine learning techniques and being good at different kinds of math. In prac- tice, one can usually do much better with a correct application of a commonplace algorithm than by sloppily applying an obscure algorithm. Correct application of an algorithm depends on mastering some fairly simple methodology. Many of the recommendations in this chapter are adapted from ( ). Ng 2015 We recommend the following practical design process: • Determine your goals—what error metric to use, and your target value for this error metric. These goals and error metrics should be driven by the problem that the application is intended to solve. • Establish a working end-to-end pipeline as soon as possible, including the 421
  • 438. CHAPTER 11. PRACTICAL METHODOLOGY estimation of the appropriate performance metrics. • Instrument the system well to determine bottlenecks in performance. Diag- nose which components are performing worse than expected and whether it is due to overfitting, underfitting, or a defect in the data or software. • Repeatedly make incremental changes such as gathering new data, adjusting hyperparameters, or changing algorithms, based on specific findings from your instrumentation. As a running example, we will use Street View address number transcription system ( , ). The purpose of this application is to add Goodfellow et al. 2014d buildings to Google Maps. Street View cars photograph the buildings and record the GPS coordinates associated with each photograph. A convolutional network recognizes the address number in each photograph, allowing the Google Maps database to add that address in the correct location. The story of how this commercial application was developed gives an example of how to follow the design methodology we advocate. We now describe each of the steps in this process. 11.1 Performance Metrics Determining your goals, in terms of which error metric to use, is a necessary first step because your error metric will guide all of your future actions. You should also have an idea of what level of performance you desire. Keep in mind that for most applications, it is impossible to achieve absolute zero error. The Bayes error defines the minimum error rate that you can hope to achieve, even if you have infinite training data and can recover the true probability distribution. This is because your input features may not contain complete information about the output variable, or because the system might be intrinsically stochastic. You will also be limited by having a finite amount of training data. The amount of training data can be limited for a variety of reasons. When your goal is to build the best possible real-world product or service, you can typically collect more data but must determine the value of reducing error further and weigh this against the cost of collecting more data. Data collection can require time, money, or human suffering (for example, if your data collection process involves performing invasive medical tests). When your goal is to answer a scientific question about which algorithm performs better on a fixed benchmark, the benchmark 422
  • 439. CHAPTER 11. PRACTICAL METHODOLOGY specification usually determines the training set and you are not allowed to collect more data. How can one determine a reasonable level of performance to expect? Typically, in the academic setting, we have some estimate of the error rate that is attainable based on previously published benchmark results. In the real-word setting, we have some idea of the error rate that is necessary for an application to be safe, cost-effective, or appealing to consumers. Once you have determined your realistic desired error rate, your design decisions will be guided by reaching this error rate. Another important consideration besides the target value of the performance metric is the choice of which metric to use. Several different performance metrics may be used to measure the effectiveness of a complete application that includes machine learning components. These performance metrics are usually different from the cost function used to train the model. As described in section , it is 5.1.2 common to measure the accuracy, or equivalently, the error rate, of a system. However, many applications require more advanced metrics. Sometimes it is much more costly to make one kind of a mistake than another. For example, an e-mail spam detection system can make two kinds of mistakes: incorrectly classifying a legitimate message as spam, and incorrectly allowing a spam message to appear in the inbox. It is much worse to block a legitimate message than to allow a questionable message to pass through. Rather than measuring the error rate of a spam classifier, we may wish to measure some form of total cost, where the cost of blocking legitimate messages is higher than the cost of allowing spam messages. Sometimes we wish to train a binary classifier that is intended to detect some rare event. For example, we might design a medical test for a rare disease. Suppose that only one in every million people has this disease. We can easily achieve 99.9999% accuracy on the detection task, by simply hard-coding the classifier to always report that the disease is absent. Clearly, accuracy is a poor way to characterize the performance of such a system. One way to solve this problem is to instead measure precision and recall. Precision is the fraction of detections reported by the model that were correct, while recall is the fraction of true events that were detected. A detector that says no one has the disease would achieve perfect precision, but zero recall. A detector that says everyone has the disease would achieve perfect recall, but precision equal to the percentage of people who have the disease (0.0001% in our example of a disease that only one people in a million have). When using precision and recall, it is common to plot a PR curve, with precision on the y-axis and recall on the x-axis. The classifier generates a score that is higher if the event to be detected occurred. For example, a feedforward 423
network designed to detect a disease outputs \hat{y} = P(y = 1 | x), estimating the probability that a person whose medical results are described by features x has the disease. We choose to report a detection whenever this score exceeds some threshold. By varying the threshold, we can trade precision for recall.

In many cases, we wish to summarize the performance of the classifier with a single number rather than a curve. To do so, we can convert precision p and recall r into an F-score given by

F = \frac{2pr}{p + r}.     (11.1)

Another option is to report the total area lying beneath the PR curve.

In some applications, it is possible for the machine learning system to refuse to make a decision. This is useful when the machine learning algorithm can estimate how confident it should be about a decision, especially if a wrong decision can be harmful and if a human operator is able to occasionally take over. The Street View transcription system provides an example of this situation. The task is to transcribe the address number from a photograph in order to associate the location where the photo was taken with the correct address in a map. Because the value of the map degrades considerably if the map is inaccurate, it is important to add an address only if the transcription is correct. If the machine learning system thinks that it is less likely than a human being to obtain the correct transcription, then the best course of action is to allow a human to transcribe the photo instead. Of course, the machine learning system is only useful if it is able to dramatically reduce the amount of photos that the human operators must process. A natural performance metric to use in this situation is coverage. Coverage is the fraction of examples for which the machine learning system is able to produce a response. It is possible to trade coverage for accuracy. One can always obtain 100% accuracy by refusing to process any example, but this reduces the coverage to 0%. For the Street View task, the goal for the project was to reach human-level transcription accuracy while maintaining 95% coverage. Human-level performance on this task is 98% accuracy.

Many other metrics are possible. We can, for example, measure click-through rates, collect user satisfaction surveys, and so on. Many specialized application areas have application-specific criteria as well.

What is important is to determine which performance metric to improve ahead of time, then concentrate on improving this metric. Without clearly defined goals, it can be difficult to tell whether changes to a machine learning system make progress or not.
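As a concrete illustration of precision, recall, and the F-score of equation 11.1, the following sketch evaluates a toy detector for a rare event at several thresholds; the labels and scores are synthetic and the thresholds are arbitrary.

```python
import numpy as np

def precision_recall(scores, labels, threshold):
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fp = np.sum(pred & (labels == 0))
    fn = np.sum(~pred & (labels == 1))
    precision = tp / (tp + fp) if tp + fp > 0 else 1.0   # no detections: vacuously precise
    recall = tp / (tp + fn) if tp + fn > 0 else 1.0
    return precision, recall

def f_score(p, r):
    return 2 * p * r / (p + r) if p + r > 0 else 0.0     # eq. 11.1

rng = np.random.default_rng(9)
labels = (rng.random(10000) < 0.001).astype(int)                  # a rare event
scores = labels * rng.random(10000) + 0.3 * rng.random(10000)     # toy detector scores

for t in [0.1, 0.3, 0.5, 0.7]:
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t:.1f}  precision={p:.3f}  recall={r:.3f}  F={f_score(p, r):.3f}")
```

Sweeping the threshold as in this loop traces out the PR curve described above; raising the threshold typically increases precision at the expense of recall.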
11.2 Default Baseline Models

After choosing performance metrics and goals, the next step in any practical application is to establish a reasonable end-to-end system as soon as possible. In this section, we provide recommendations for which algorithms to use as the first baseline approach in various situations. Keep in mind that deep learning research progresses quickly, so better default algorithms are likely to become available soon after this writing.

Depending on the complexity of your problem, you may even want to begin without using deep learning. If your problem has a chance of being solved by just choosing a few linear weights correctly, you may want to begin with a simple statistical model like logistic regression.

If you know that your problem falls into an “AI-complete” category like object recognition, speech recognition, machine translation, and so on, then you are likely to do well by beginning with an appropriate deep learning model.

First, choose the general category of model based on the structure of your data. If you want to perform supervised learning with fixed-size vectors as input, use a feedforward network with fully connected layers. If the input has known topological structure (for example, if the input is an image), use a convolutional network. In these cases, you should begin by using some kind of piecewise linear unit (ReLUs or their generalizations, such as Leaky ReLUs, PReLUs and maxout). If your input or output is a sequence, use a gated recurrent net (LSTM or GRU).

A reasonable choice of optimization algorithm is SGD with momentum with a decaying learning rate (popular decay schemes that perform better or worse on different problems include decaying linearly until reaching a fixed minimum learning rate, decaying exponentially, or decreasing the learning rate by a factor of 2-10 each time validation error plateaus). Another very reasonable alternative is Adam. Batch normalization can have a dramatic effect on optimization performance, especially for convolutional networks and networks with sigmoidal nonlinearities. While it is reasonable to omit batch normalization from the very first baseline, it should be introduced quickly if optimization appears to be problematic.

Unless your training set contains tens of millions of examples or more, you should include some mild forms of regularization from the start. Early stopping should be used almost universally. Dropout is an excellent regularizer that is easy to implement and compatible with many models and training algorithms. Batch normalization also sometimes reduces generalization error and allows dropout to be omitted, due to the noise in the estimate of the statistics used to normalize each variable.
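As a concrete illustration of the default optimizer suggested above, here is a minimal NumPy sketch of SGD with momentum and a linearly decaying learning rate. The schedule constants and the toy quadratic objective are placeholder assumptions, not recommendations from the book.

import numpy as np

def lr_schedule(t, lr0=0.1, lr_min=0.001, decay_steps=1000):
    # Decay linearly until reaching a fixed minimum learning rate.
    frac = min(t / decay_steps, 1.0)
    return (1 - frac) * lr0 + frac * lr_min

def sgd_momentum(grad_fn, w0, steps=2000, momentum=0.9):
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for t in range(steps):
        g = grad_fn(w)                        # gradient on the current minibatch
        v = momentum * v - lr_schedule(t) * g
        w = w + v
    return w

# Toy objective: f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w_final = sgd_momentum(lambda w: w, w0=[5.0, -3.0])
print(w_final)  # should be close to the minimum at the origin

In practice the gradient would come from a minibatch of training examples, and the decay constants would themselves be tuned as hyperparameters (section 11.4).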
If your task is similar to another task that has been studied extensively, you will probably do well by first copying the model and algorithm that is already known to perform best on the previously studied task. You may even want to copy a trained model from that task. For example, it is common to use the features from a convolutional network trained on ImageNet to solve other computer vision tasks (Girshick et al., 2015).

A common question is whether to begin by using unsupervised learning, described further in part III. This is somewhat domain specific. Some domains, such as natural language processing, are known to benefit tremendously from unsupervised learning techniques such as learning unsupervised word embeddings. In other domains, such as computer vision, current unsupervised learning techniques do not bring a benefit, except in the semi-supervised setting, when the number of labeled examples is very small (Kingma et al., 2014; Rasmus et al., 2015). If your application is in a context where unsupervised learning is known to be important, then include it in your first end-to-end baseline. Otherwise, only use unsupervised learning in your first attempt if the task you want to solve is unsupervised. You can always try adding unsupervised learning later if you observe that your initial baseline overfits.

11.3 Determining Whether to Gather More Data

After the first end-to-end system is established, it is time to measure the performance of the algorithm and determine how to improve it. Many machine learning novices are tempted to make improvements by trying out many different algorithms. However, it is often much better to gather more data than to improve the learning algorithm.

How does one decide whether to gather more data? First, determine whether the performance on the training set is acceptable. If performance on the training set is poor, the learning algorithm is not using the training data that is already available, so there is no reason to gather more data. Instead, try increasing the size of the model by adding more layers or adding more hidden units to each layer. Also, try improving the learning algorithm, for example by tuning the learning rate hyperparameter. If large models and carefully tuned optimization algorithms do not work well, then the problem might be the quality of the training data. The data may be too noisy or may not include the right inputs needed to predict the desired outputs. This suggests starting over, collecting cleaner data or collecting a richer set of features.
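This decision procedure (including the test-set check discussed next) can be summarized in a few lines of Python. The gap tolerance and the suggested actions below are illustrative assumptions rather than rules from the book.

def what_to_try_next(train_error, test_error, target_error, gap_tolerance=0.02):
    if train_error > target_error:
        # The model is not even using the data it already has.
        return ("increase model size (more layers / hidden units), tune the "
                "learning rate, or improve data quality; do not gather more data yet")
    if test_error - train_error > gap_tolerance:
        return ("reduce the train/test gap: add or strengthen regularization "
                "(weight decay, dropout); if the gap remains too large, gather "
                "more training data")
    return "performance is acceptable; nothing left to do"

print(what_to_try_next(train_error=0.12, test_error=0.14, target_error=0.05))
print(what_to_try_next(train_error=0.03, test_error=0.11, target_error=0.05))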
If the performance on the training set is acceptable, then measure the performance on a test set. If the performance on the test set is also acceptable, then there is nothing left to be done. If test set performance is much worse than training set performance, then gathering more data is one of the most effective solutions. The key considerations are the cost and feasibility of gathering more data, the cost and feasibility of reducing the test error by other means, and the amount of data that is expected to be necessary to improve test set performance significantly. At large internet companies with millions or billions of users, it is feasible to gather large datasets, and the expense of doing so can be considerably less than the other alternatives, so the answer is almost always to gather more training data. For example, the development of large labeled datasets was one of the most important factors in solving object recognition. In other contexts, such as medical applications, it may be costly or infeasible to gather more data. A simple alternative to gathering more data is to reduce the size of the model or improve regularization, by adjusting hyperparameters such as weight decay coefficients, or by adding regularization strategies such as dropout. If you find that the gap between train and test performance is still unacceptable even after tuning the regularization hyperparameters, then gathering more data is advisable.

When deciding whether to gather more data, it is also necessary to decide how much to gather. It is helpful to plot curves showing the relationship between training set size and generalization error, as in figure 5.4. By extrapolating such curves, one can predict how much additional training data would be needed to achieve a certain level of performance. Usually, adding a small fraction of the total number of examples will not have a noticeable impact on generalization error. It is therefore recommended to experiment with training set sizes on a logarithmic scale, for example doubling the number of examples between consecutive experiments.

If gathering much more data is not feasible, the only other way to improve generalization error is to improve the learning algorithm itself. This becomes the domain of research and not the domain of advice for applied practitioners.

11.4 Selecting Hyperparameters

Most deep learning algorithms come with many hyperparameters that control many aspects of the algorithm’s behavior. Some of these hyperparameters affect the time and memory cost of running the algorithm. Some of these hyperparameters affect the quality of the model recovered by the training process and its ability to infer correct results when deployed on new inputs.

There are two basic approaches to choosing these hyperparameters: choosing them manually and choosing them automatically.
Choosing the hyperparameters manually requires understanding what the hyperparameters do and how machine learning models achieve good generalization. Automatic hyperparameter selection algorithms greatly reduce the need to understand these ideas, but they are often much more computationally costly.

11.4.1 Manual Hyperparameter Tuning

To set hyperparameters manually, one must understand the relationship between hyperparameters, training error, generalization error and computational resources (memory and runtime). This means establishing a solid foundation on the fundamental ideas concerning the effective capacity of a learning algorithm from chapter 5.

The goal of manual hyperparameter search is usually to find the lowest generalization error subject to some runtime and memory budget. We do not discuss how to determine the runtime and memory impact of various hyperparameters here because this is highly platform-dependent.

The primary goal of manual hyperparameter search is to adjust the effective capacity of the model to match the complexity of the task. Effective capacity is constrained by three factors: the representational capacity of the model, the ability of the learning algorithm to successfully minimize the cost function used to train the model, and the degree to which the cost function and training procedure regularize the model. A model with more layers and more hidden units per layer has higher representational capacity: it is capable of representing more complicated functions. It cannot necessarily learn all of these functions, though, if the training algorithm cannot discover that certain functions do a good job of minimizing the training cost, or if regularization terms such as weight decay forbid some of these functions.

The generalization error typically follows a U-shaped curve when plotted as a function of one of the hyperparameters, as in figure 5.3. At one extreme, the hyperparameter value corresponds to low capacity, and generalization error is high because training error is high. This is the underfitting regime. At the other extreme, the hyperparameter value corresponds to high capacity, and the generalization error is high because the gap between training and test error is high. Somewhere in the middle lies the optimal model capacity, which achieves the lowest possible generalization error, by adding a medium generalization gap to a medium amount of training error.
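This U-shaped behavior is easy to reproduce in a toy setting. The sketch below (an assumed polynomial-regression setup, not an example from the book) sweeps one capacity hyperparameter, the polynomial degree, and prints training and validation error; training error keeps falling as capacity grows, while validation error eventually rises again.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(3 * x_train) + rng.normal(0, 0.2, 20)
x_val = np.sort(rng.uniform(-1, 1, 200))
y_val = np.sin(3 * x_val) + rng.normal(0, 0.2, 200)

for degree in [1, 2, 4, 8, 12]:            # capacity hyperparameter
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    val_err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, validation MSE {val_err:.3f}")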
For some hyperparameters, overfitting occurs when the value of the hyperparameter is large. The number of hidden units in a layer is one such example, because increasing the number of hidden units increases the capacity of the model. For some hyperparameters, overfitting occurs when the value of the hyperparameter is small. For example, the smallest allowable weight decay coefficient of zero corresponds to the greatest effective capacity of the learning algorithm.

Not every hyperparameter will be able to explore the entire U-shaped curve. Many hyperparameters are discrete, such as the number of units in a layer or the number of linear pieces in a maxout unit, so it is only possible to visit a few points along the curve. Some hyperparameters are binary. Usually these hyperparameters are switches that specify whether or not to use some optional component of the learning algorithm, such as a preprocessing step that normalizes the input features by subtracting their mean and dividing by their standard deviation. These hyperparameters can only explore two points on the curve. Other hyperparameters have some minimum or maximum value that prevents them from exploring some part of the curve. For example, the minimum weight decay coefficient is zero. This means that if the model is underfitting when weight decay is zero, we can not enter the overfitting region by modifying the weight decay coefficient. In other words, some hyperparameters can only subtract capacity.

The learning rate is perhaps the most important hyperparameter. If you have time to tune only one hyperparameter, tune the learning rate. It controls the effective capacity of the model in a more complicated way than other hyperparameters: the effective capacity of the model is highest when the learning rate is correct for the optimization problem, not when the learning rate is especially large or especially small. The learning rate has a U-shaped curve for training error, illustrated in figure 11.1. When the learning rate is too large, gradient descent can inadvertently increase rather than decrease the training error. In the idealized quadratic case, this occurs if the learning rate is at least twice as large as its optimal value (LeCun et al., 1998a). When the learning rate is too small, training is not only slower, but may become permanently stuck with a high training error. This effect is poorly understood (it would not happen for a convex loss function).

Tuning the parameters other than the learning rate requires monitoring both training and test error to diagnose whether your model is overfitting or underfitting, then adjusting its capacity appropriately. If your error on the training set is higher than your target error rate, you have no choice but to increase capacity. If you are not using regularization and you are confident that your optimization algorithm is performing correctly, then you must add more layers to your network or add more hidden units. Unfortunately, this increases the computational costs associated with the model.
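The claim about the idealized quadratic case is easy to verify numerically. In the toy sketch below (the curvature and step count are assumptions chosen for illustration), gradient descent on f(w) = 0.5 * lam * w^2 has optimal learning rate 1/lam; rates somewhat above the optimum still converge, while a rate of at least twice the optimum makes the training error grow.

import numpy as np

lam = 2.0                        # curvature of the quadratic
optimal_lr = 1.0 / lam
for lr in [0.05, 0.2, optimal_lr, 1.5 * optimal_lr, 2.5 * optimal_lr]:
    w = 1.0
    for _ in range(50):          # fixed training time, as in figure 11.1
        w = w - lr * lam * w     # the gradient of f is lam * w
    print(f"lr = {lr:.3f}: training error {0.5 * lam * w * w:.3e}")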
Figure 11.1: Typical relationship between the learning rate and the training error (training error on the vertical axis, learning rate on a logarithmic horizontal scale). Notice the sharp rise in error when the learning rate is above an optimal value. This is for a fixed training time, as a smaller learning rate may sometimes only slow down training by a factor proportional to the learning rate reduction. Generalization error can follow this curve or be complicated by regularization effects arising out of having too large or too small a learning rate, since poor optimization can, to some degree, reduce or prevent overfitting, and even points with equivalent training error can have different generalization error.

If your error on the test set is higher than your target error rate, you can now take two kinds of actions. The test error is the sum of the training error and the gap between training and test error. The optimal test error is found by trading off these quantities. Neural networks typically perform best when the training error is very low (and thus, when capacity is high) and the test error is primarily driven by the gap between train and test error. Your goal is to reduce this gap without increasing training error faster than the gap decreases. To reduce the gap, change regularization hyperparameters to reduce effective model capacity, such as by adding dropout or weight decay. Usually the best performance comes from a large model that is regularized well, for example by using dropout.

Most hyperparameters can be set by reasoning about whether they increase or decrease model capacity. Some examples are included in table 11.1.

While manually tuning hyperparameters, do not lose sight of your end goal: good performance on the test set. Adding regularization is only one way to achieve this goal. As long as you have low training error, you can always reduce generalization error by collecting more training data. The brute force way to practically guarantee success is to continually increase model capacity and training set size until the task is solved. This approach does of course increase the computational cost of training and inference, so it is only feasible given appropriate resources.
Number of hidden units: increases capacity when increased. Reason: increasing the number of hidden units increases the representational capacity of the model. Caveats: increasing the number of hidden units also increases both the time and memory cost of essentially every operation on the model.

Learning rate: increases capacity when tuned optimally. Reason: an improper learning rate, whether too high or too low, results in a model with low effective capacity due to optimization failure.

Convolution kernel width: increases capacity when increased. Reason: increasing the kernel width increases the number of parameters in the model. Caveats: a wider kernel results in a narrower output dimension, reducing model capacity unless you use implicit zero padding to reduce this effect; wider kernels require more memory for parameter storage and increase runtime, but a narrower output reduces memory cost.

Implicit zero padding: increases capacity when increased. Reason: adding implicit zeros before convolution keeps the representation size large. Caveats: increased time and memory cost of most operations.

Weight decay coefficient: increases capacity when decreased. Reason: decreasing the weight decay coefficient frees the model parameters to become larger.

Dropout rate: increases capacity when decreased. Reason: dropping units less often gives the units more opportunities to “conspire” with each other to fit the training set.

Table 11.1: The effect of various hyperparameters on model capacity.
In principle, this approach could fail due to optimization difficulties, but for many problems optimization does not seem to be a significant barrier, provided that the model is chosen appropriately.

11.4.2 Automatic Hyperparameter Optimization Algorithms

The ideal learning algorithm just takes a dataset and outputs a function, without requiring hand-tuning of hyperparameters. The popularity of several learning algorithms such as logistic regression and SVMs stems in part from their ability to perform well with only one or two tuned hyperparameters. Neural networks can sometimes perform well with only a small number of tuned hyperparameters, but often benefit significantly from tuning of forty or more hyperparameters. Manual hyperparameter tuning can work very well when the user has a good starting point, such as one determined by others having worked on the same type of application and architecture, or when the user has months or years of experience in exploring hyperparameter values for neural networks applied to similar tasks. However, for many applications, these starting points are not available. In these cases, automated algorithms can find useful values of the hyperparameters.

If we think about the way in which the user of a learning algorithm searches for good values of the hyperparameters, we realize that an optimization is taking place: we are trying to find a value of the hyperparameters that optimizes an objective function, such as validation error, sometimes under constraints (such as a budget for training time, memory or recognition time). It is therefore possible, in principle, to develop hyperparameter optimization algorithms that wrap a learning algorithm and choose its hyperparameters, thus hiding the hyperparameters of the learning algorithm from the user. Unfortunately, hyperparameter optimization algorithms often have their own hyperparameters, such as the range of values that should be explored for each of the learning algorithm’s hyperparameters. However, these secondary hyperparameters are usually easier to choose, in the sense that acceptable performance may be achieved on a wide range of tasks using the same secondary hyperparameters for all tasks.

11.4.3 Grid Search

When there are three or fewer hyperparameters, the common practice is to perform grid search. For each hyperparameter, the user selects a small finite set of values to explore. The grid search algorithm then trains a model for every joint specification of hyperparameter values in the Cartesian product of the set of values for each individual hyperparameter.
Figure 11.2: Comparison of grid search and random search. For illustration purposes we display two hyperparameters, but we are typically interested in having many more. (Left) To perform grid search, we provide a set of values for each hyperparameter. The search algorithm runs training for every joint hyperparameter setting in the cross product of these sets. (Right) To perform random search, we provide a probability distribution over joint hyperparameter configurations. Usually most of these hyperparameters are independent from each other. Common choices for the distribution over a single hyperparameter include uniform and log-uniform (to sample from a log-uniform distribution, take the exp of a sample from a uniform distribution). The search algorithm then randomly samples joint hyperparameter configurations and runs training with each of them. Both grid search and random search evaluate the validation set error and return the best configuration. The figure illustrates the typical case in which only some hyperparameters have a significant influence on the result. In this illustration, only the hyperparameter on the horizontal axis has a significant effect. Grid search wastes an amount of computation that is exponential in the number of non-influential hyperparameters, while random search tests a unique value of every influential hyperparameter on nearly every trial. Figure reproduced with permission from Bergstra and Bengio (2012).
The experiment that yields the best validation set error is then chosen as having found the best hyperparameters. See the left of figure 11.2 for an illustration of a grid of hyperparameter values.

How should the lists of values to search over be chosen? In the case of numerical (ordered) hyperparameters, the smallest and largest element of each list is chosen conservatively, based on prior experience with similar experiments, to make sure that the optimal value is very likely to be in the selected range. Typically, a grid search involves picking values approximately on a logarithmic scale, e.g., a learning rate taken within the set {0.1, 0.01, 10^-3, 10^-4, 10^-5}, or a number of hidden units taken within the set {50, 100, 200, 500, 1000, 2000}.

Grid search usually performs best when it is performed repeatedly. For example, suppose that we ran a grid search over a hyperparameter α using values of {−1, 0, 1}. If the best value found is 1, then we underestimated the range in which the best α lies and we should shift the grid and run another search with α in, for example, {1, 2, 3}. If we find that the best value of α is 0, then we may wish to refine our estimate by zooming in and running a grid search over {−0.1, 0, 0.1}.

The obvious problem with grid search is that its computational cost grows exponentially with the number of hyperparameters. If there are m hyperparameters, each taking at most n values, then the number of training and evaluation trials required grows as O(n^m). The trials may be run in parallel and exploit loose parallelism (with almost no need for communication between different machines carrying out the search). Unfortunately, due to the exponential cost of grid search, even parallelization may not provide a satisfactory size of search.

11.4.4 Random Search

Fortunately, there is an alternative to grid search that is as simple to program, more convenient to use, and converges much faster to good values of the hyperparameters: random search (Bergstra and Bengio, 2012).

A random search proceeds as follows. First we define a marginal distribution for each hyperparameter, e.g., a Bernoulli or multinoulli for binary or discrete hyperparameters, or a uniform distribution on a log-scale for positive real-valued hyperparameters. For example,

    log_learning_rate ∼ u(−1, −5),                            (11.2)
    learning_rate = 10^log_learning_rate,                     (11.3)

where u(a, b) indicates a sample of the uniform distribution in the interval (a, b). Similarly, the log_number_of_hidden_units may be sampled from u(log(50), log(2000)).
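A minimal sketch of both procedures might look as follows. The fake validation objective stands in for an expensive training run, and the value ranges mirror the examples given above; everything else is an illustrative assumption.

import itertools
import numpy as np

rng = np.random.default_rng(0)

def train_and_validate(learning_rate, n_hidden):
    # Stand-in for real training; returns a made-up validation error.
    return (np.log10(learning_rate) + 2.5) ** 2 + 0.001 * abs(n_hidden - 300)

# Grid search: the Cartesian product of small finite sets of values.
lr_grid = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]
hidden_grid = [50, 100, 200, 500, 1000, 2000]
best_grid = min(itertools.product(lr_grid, hidden_grid),
                key=lambda cfg: train_and_validate(*cfg))

# Random search: sample each hyperparameter from its own marginal distribution,
# e.g. a log-uniform learning rate as in equations 11.2-11.3.
def sample_config():
    log_lr = rng.uniform(-5, -1)
    n_hidden = int(np.exp(rng.uniform(np.log(50), np.log(2000))))
    return 10 ** log_lr, n_hidden

best_random = min((sample_config() for _ in range(30)),
                  key=lambda cfg: train_and_validate(*cfg))
print("grid search best:", best_grid)
print("random search best:", best_random)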
Unlike in the case of a grid search, one should not discretize or bin the values of the hyperparameters. This allows one to explore a larger set of values, and does not incur additional computational cost. In fact, as illustrated in figure 11.2, a random search can be exponentially more efficient than a grid search, when there are several hyperparameters that do not strongly affect the performance measure. This is studied at length by Bergstra and Bengio (2012), who found that random search reduces the validation set error much faster than grid search, in terms of the number of trials run by each method.

As with grid search, one may often want to run repeated versions of random search, to refine the search based on the results of the first run.

The main reason why random search finds good solutions faster than grid search is that there are no wasted experimental runs, unlike in the case of grid search, when two values of a hyperparameter (given values of the other hyperparameters) would give the same result. In the case of grid search, the other hyperparameters would have the same values for these two runs, whereas with random search, they would usually have different values. Hence, if the change between these two values does not make much difference in terms of validation set error, grid search will unnecessarily repeat two equivalent experiments, while random search will still give two independent explorations of the other hyperparameters.

11.4.5 Model-Based Hyperparameter Optimization

The search for good hyperparameters can be cast as an optimization problem. The decision variables are the hyperparameters. The cost to be optimized is the validation set error that results from training using these hyperparameters. In simplified settings where it is feasible to compute the gradient of some differentiable error measure on the validation set with respect to the hyperparameters, we can simply follow this gradient (Bengio et al., 1999; Bengio, 2000; Maclaurin et al., 2015). Unfortunately, in most practical settings, this gradient is unavailable, either due to its high computation and memory cost, or due to hyperparameters having intrinsically non-differentiable interactions with the validation set error, as in the case of discrete-valued hyperparameters.

To compensate for this lack of a gradient, we can build a model of the validation set error, then propose new hyperparameter guesses by performing optimization within this model. Most model-based algorithms for hyperparameter search use a Bayesian regression model to estimate both the expected value of the validation set error for each hyperparameter and the uncertainty around this expectation.
Optimization thus involves a tradeoff between exploration (proposing hyperparameters for which there is high uncertainty, which may lead to a large improvement but may also perform poorly) and exploitation (proposing hyperparameters that the model is confident will perform as well as any it has seen so far, usually hyperparameters very similar to ones it has seen before). Contemporary approaches to hyperparameter optimization include Spearmint (Snoek et al., 2012), TPE (Bergstra et al., 2011) and SMAC (Hutter et al., 2011).

Currently, we cannot unambiguously recommend Bayesian hyperparameter optimization as an established tool for achieving better deep learning results or for obtaining those results with less effort. Bayesian hyperparameter optimization sometimes performs comparably to human experts, sometimes better, but fails catastrophically on other problems. It may be worth trying to see if it works on a particular problem but is not yet sufficiently mature or reliable. That being said, hyperparameter optimization is an important field of research that, while often driven primarily by the needs of deep learning, holds the potential to benefit not only the entire field of machine learning but the discipline of engineering in general.

One drawback common to most hyperparameter optimization algorithms with more sophistication than random search is that they require a training experiment to run to completion before they are able to extract any information from the experiment. This is much less efficient, in the sense of how much information can be gleaned early in an experiment, than manual search by a human practitioner, since one can usually tell early on if some set of hyperparameters is completely pathological. Swersky et al. (2014) have introduced an early version of an algorithm that maintains a set of multiple experiments. At various time points, the hyperparameter optimization algorithm can choose to begin a new experiment, to “freeze” a running experiment that is not promising, or to “thaw” and resume an experiment that was earlier frozen but now appears promising given more information.

11.5 Debugging Strategies

When a machine learning system performs poorly, it is usually difficult to tell whether the poor performance is intrinsic to the algorithm itself or whether there is a bug in the implementation of the algorithm. Machine learning systems are difficult to debug for a variety of reasons.

In most cases, we do not know a priori what the intended behavior of the algorithm is. In fact, the entire point of using machine learning is that it will discover useful behavior that we were not able to specify ourselves.
If we train a neural network on a new classification task and it achieves 5% test error, we have no straightforward way of knowing if this is the expected behavior or sub-optimal behavior.

A further difficulty is that most machine learning models have multiple parts that are each adaptive. If one part is broken, the other parts can adapt and still achieve roughly acceptable performance. For example, suppose that we are training a neural net with several layers parametrized by weights W and biases b. Suppose further that we have manually implemented the gradient descent rule for each parameter separately, and we made an error in the update for the biases:

    b ← b − α,                                                (11.4)

where α is the learning rate. This erroneous update does not use the gradient at all. It causes the biases to constantly become negative throughout learning, which is clearly not a correct implementation of any reasonable learning algorithm. The bug may not be apparent just from examining the output of the model though. Depending on the distribution of the input, the weights may be able to adapt to compensate for the negative biases.

Most debugging strategies for neural nets are designed to get around one or both of these two difficulties. Either we design a case that is so simple that the correct behavior actually can be predicted, or we design a test that exercises one part of the neural net implementation in isolation.

Some important debugging tests include:

Visualize the model in action: When training a model to detect objects in images, view some images with the detections proposed by the model displayed superimposed on the image. When training a generative model of speech, listen to some of the speech samples it produces. This may seem obvious, but it is easy to fall into the practice of only looking at quantitative performance measurements like accuracy or log-likelihood. Directly observing the machine learning model performing its task will help to determine whether the quantitative performance numbers it achieves seem reasonable. Evaluation bugs can be some of the most devastating bugs because they can mislead you into believing your system is performing well when it is not.

Visualize the worst mistakes: Most models are able to output some sort of confidence measure for the task they perform. For example, classifiers based on a softmax output layer assign a probability to each class. The probability assigned to the most likely class thus gives an estimate of the confidence the model has in its classification decision. Typically, maximum likelihood training results in these values being overestimates rather than accurate probabilities of correct prediction,
but they are somewhat useful in the sense that examples that are actually less likely to be correctly labeled receive smaller probabilities under the model. By viewing the training set examples that are the hardest to model correctly, one can often discover problems with the way the data has been preprocessed or labeled. For example, the Street View transcription system originally had a problem where the address number detection system would crop the image too tightly and omit some of the digits. The transcription network then assigned very low probability to the correct answer on these images. Sorting the images to identify the most confident mistakes showed that there was a systematic problem with the cropping. Modifying the detection system to crop much wider images resulted in much better performance of the overall system, even though the transcription network needed to be able to process greater variation in the position and scale of the address numbers.

Reasoning about software using train and test error: It is often difficult to determine whether the underlying software is correctly implemented. Some clues can be obtained from the train and test error. If training error is low but test error is high, then it is likely that the training procedure works correctly, and the model is overfitting for fundamental algorithmic reasons. An alternative possibility is that the test error is measured incorrectly due to a problem with saving the model after training then reloading it for test set evaluation, or if the test data was prepared differently from the training data. If both train and test error are high, then it is difficult to determine whether there is a software defect or whether the model is underfitting due to fundamental algorithmic reasons. This scenario requires further tests, described next.

Fit a tiny dataset: If you have high error on the training set, determine whether it is due to genuine underfitting or due to a software defect. Usually even small models can be guaranteed to be able to fit a sufficiently small dataset. For example, a classification dataset with only one example can be fit just by setting the biases of the output layer correctly. Usually if you cannot train a classifier to correctly label a single example, an autoencoder to successfully reproduce a single example with high fidelity, or a generative model to consistently emit samples resembling a single example, there is a software defect preventing successful optimization on the training set. This test can be extended to a small dataset with few examples.
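A minimal version of this test might look like the sketch below, where a small softmax regression model is an assumed stand-in for whatever model you are debugging. If a couple of thousand gradient steps cannot drive the loss on a handful of examples close to zero, suspect a defect in the implementation rather than genuine underfitting.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))            # five examples, ten features
y = np.array([0, 1, 2, 0, 1])           # three classes
W = np.zeros((10, 3)); b = np.zeros(3)

for step in range(2000):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(5), y]).mean()
    grad_logits = probs.copy()
    grad_logits[np.arange(5), y] -= 1
    grad_logits /= 5
    W -= 0.2 * (X.T @ grad_logits)
    b -= 0.2 * grad_logits.sum(axis=0)

# Should end up far below the initial value of log(3) ~= 1.1 and close to zero.
print(f"training loss on the tiny dataset: {loss:.4f}")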
Compare back-propagated derivatives to numerical derivatives: If you are using a software framework that requires you to implement your own gradient computations, or if you are adding a new operation to a differentiation library and must define its bprop method, then a common source of error is implementing this gradient expression incorrectly. One way to verify that these derivatives are correct is to compare the derivatives computed by your implementation of automatic differentiation to the derivatives computed by finite differences. Because

    f'(x) = lim_{ε→0} (f(x + ε) − f(x)) / ε,                  (11.5)

we can approximate the derivative by using a small, finite ε:

    f'(x) ≈ (f(x + ε) − f(x)) / ε.                            (11.6)

We can improve the accuracy of the approximation by using the centered difference:

    f'(x) ≈ (f(x + ε/2) − f(x − ε/2)) / ε.                    (11.7)

The perturbation size ε must be chosen to be large enough to ensure that the perturbation is not rounded down too much by finite-precision numerical computations.

Usually, we will want to test the gradient or Jacobian of a vector-valued function g : R^m → R^n. Unfortunately, finite differencing only allows us to take a single derivative at a time. We can either run finite differencing mn times to evaluate all of the partial derivatives of g, or we can apply the test to a new function that uses random projections at both the input and output of g. For example, we can apply our test of the implementation of the derivatives to f(x), where f(x) = u^T g(vx) and u and v are randomly chosen vectors. Computing f'(x) correctly requires being able to back-propagate through g correctly, yet it is efficient to do with finite differences because f has only a single input and a single output. It is usually a good idea to repeat this test for more than one value of u and v to reduce the chance that the test overlooks mistakes that are orthogonal to the random projection.

If one has access to numerical computation on complex numbers, then there is a very efficient way to numerically estimate the gradient by using complex numbers as input to the function (Squire and Trapp, 1998). The method is based on the observation that

    f(x + iε) = f(x) + iεf'(x) + O(ε²),                       (11.8)
    real(f(x + iε)) = f(x) + O(ε²),   imag(f(x + iε)) / ε = f'(x) + O(ε²),   (11.9)

where i = √−1. Unlike in the real-valued case above, there is no cancellation effect due to taking the difference between the value of f at different points. This allows the use of tiny values of ε like ε = 10^−150, which make the O(ε²) error insignificant for all practical purposes.
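The following short sketch combines the centered difference of equation 11.7 with the random-projection trick described above; the elementwise tanh function and its hand-coded Jacobian-vector product are assumed placeholders for the operation being debugged.

import numpy as np

rng = np.random.default_rng(0)

def g(z):                                   # placeholder vector-valued function
    return np.tanh(z)

def grad_g_times(z, vec):                   # "back-propagated" Jacobian-vector product
    return (1 - np.tanh(z) ** 2) * vec      # diagonal Jacobian of elementwise tanh

m = 4
u = rng.normal(size=m)
v = rng.normal(size=m)
x0 = 0.3
eps = 1e-6

# Analytic derivative of f(x) = u^T g(v * x) is u^T (J_g(v * x) v).
analytic = u @ grad_g_times(v * x0, v)
# Centered finite difference, equation 11.7.
numeric = (u @ g(v * (x0 + 0.5 * eps)) - u @ g(v * (x0 - 0.5 * eps))) / eps

print(abs(analytic - numeric))              # should be tiny, e.g. below 1e-8

Repeating the comparison for several random choices of u and v reduces the chance that a bug is hidden in a direction orthogonal to a single projection.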
Monitor histograms of activations and gradient: It is often useful to visualize statistics of neural network activations and gradients, collected over a large number of training iterations (maybe one epoch). The pre-activation value of hidden units can tell us if the units saturate, or how often they do. For example, for rectifiers, how often are they off? Are there units that are always off? For tanh units, the average of the absolute value of the pre-activations tells us how saturated the unit is. In a deep network where the propagated gradients quickly grow or quickly vanish, optimization may be hampered. Finally, it is useful to compare the magnitude of parameter gradients to the magnitude of the parameters themselves. As suggested by Bottou (2015), we would like the magnitude of parameter updates over a minibatch to represent something like 1% of the magnitude of the parameter, not 50% or 0.001% (which would make the parameters move too slowly). It may be that some groups of parameters are moving at a good pace while others are stalled. When the data is sparse (like in natural language), some parameters may be very rarely updated, and this should be kept in mind when monitoring their evolution.

Finally, many deep learning algorithms provide some sort of guarantee about the results produced at each step. For example, in part III, we will see some approximate inference algorithms that work by using algebraic solutions to optimization problems. Typically these can be debugged by testing each of their guarantees. Some guarantees that some optimization algorithms offer include that the objective function will never increase after one step of the algorithm, that the gradient with respect to some subset of variables will be zero after each step of the algorithm, and that the gradient with respect to all variables will be zero at convergence. Usually, due to rounding error, these conditions will not hold exactly in a digital computer, so the debugging test should include some tolerance parameter.

11.6 Example: Multi-Digit Number Recognition

To provide an end-to-end description of how to apply our design methodology in practice, we present a brief account of the Street View transcription system, from the point of view of designing the deep learning components. Obviously, many other components of the complete system, such as the Street View cars, the database infrastructure, and so on, were of paramount importance.

From the point of view of the machine learning task, the process began with data collection. The cars collected the raw data and human operators provided labels.
The transcription task was preceded by a significant amount of dataset curation, including using other machine learning techniques to detect the house numbers prior to transcribing them.

The transcription project began with a choice of performance metrics and desired values for these metrics. An important general principle is to tailor the choice of metric to the business goals for the project. Because maps are only useful if they have high accuracy, it was important to set a high accuracy requirement for this project. Specifically, the goal was to obtain human-level, 98% accuracy. This level of accuracy may not always be feasible to obtain. In order to reach this level of accuracy, the Street View transcription system sacrifices coverage. Coverage thus became the main performance metric optimized during the project, with accuracy held at 98%. As the convolutional network improved, it became possible to reduce the confidence threshold below which the network refuses to transcribe the input, eventually exceeding the goal of 95% coverage.

After choosing quantitative goals, the next step in our recommended methodology is to rapidly establish a sensible baseline system. For vision tasks, this means a convolutional network with rectified linear units. The transcription project began with such a model. At the time, it was not common for a convolutional network to output a sequence of predictions. In order to begin with the simplest possible baseline, the first implementation of the output layer of the model consisted of n different softmax units to predict a sequence of n characters. These softmax units were trained exactly the same as if the task were classification, with each softmax unit trained independently.

Our recommended methodology is to iteratively refine the baseline and test whether each change makes an improvement. The first change to the Street View transcription system was motivated by a theoretical understanding of the coverage metric and the structure of the data. Specifically, the network refuses to classify an input x whenever the probability of the output sequence p(y | x) < t for some threshold t. Initially, the definition of p(y | x) was ad-hoc, based on simply multiplying all of the softmax outputs together. This motivated the development of a specialized output layer and cost function that actually computed a principled log-likelihood. This approach allowed the example rejection mechanism to function much more effectively.

At this point, coverage was still below 90%, yet there were no obvious theoretical problems with the approach. Our methodology therefore suggests to instrument the train and test set performance in order to determine whether the problem is underfitting or overfitting. In this case, train and test set error were nearly identical. Indeed, the main reason this project proceeded so smoothly was the availability of a dataset with tens of millions of labeled examples.
Because train and test set error were so similar, this suggested that the problem was either due to underfitting or due to a problem with the training data. One of the debugging strategies we recommend is to visualize the model’s worst errors. In this case, that meant visualizing the incorrect training set transcriptions that the model gave the highest confidence. These proved to mostly consist of examples where the input image had been cropped too tightly, with some of the digits of the address being removed by the cropping operation. For example, a photo of an address “1849” might be cropped too tightly, with only the “849” remaining visible. This problem could have been resolved by spending weeks improving the accuracy of the address number detection system responsible for determining the cropping regions. Instead, the team made the much more practical decision to simply expand the width of the crop region to be systematically wider than the address number detection system predicted. This single change added ten percentage points to the transcription system’s coverage.

Finally, the last few percentage points of performance came from adjusting hyperparameters. This mostly consisted of making the model larger while maintaining some restrictions on its computational cost. Because train and test error remained roughly equal, it was always clear that any performance deficits were due to underfitting, as well as due to a few remaining problems with the dataset itself.

Overall, the transcription project was a great success, and allowed hundreds of millions of addresses to be transcribed both faster and at lower cost than would have been possible via human effort. We hope that the design principles described in this chapter will lead to many other similar successes.
Chapter 12

Applications

In this chapter, we describe how to use deep learning to solve applications in computer vision, speech recognition, natural language processing, and other application areas of commercial interest. We begin by discussing the large scale neural network implementations required for most serious AI applications. Next, we review several specific application areas that deep learning has been used to solve. While one goal of deep learning is to design algorithms that are capable of solving a broad variety of tasks, so far some degree of specialization is needed. For example, vision tasks require processing a large number of input features (pixels) per example. Language tasks require modeling a large number of possible values (words in the vocabulary) per input feature.

12.1 Large-Scale Deep Learning

Deep learning is based on the philosophy of connectionism: while an individual biological neuron or an individual feature in a machine learning model is not intelligent, a large population of these neurons or features acting together can exhibit intelligent behavior. It truly is important to emphasize the fact that the number of neurons must be large. One of the key factors responsible for the improvement in neural networks’ accuracy and the improvement of the complexity of tasks they can solve between the 1980s and today is the dramatic increase in the size of the networks we use. As we saw in section 1.2.3, network sizes have grown exponentially for the past three decades, yet artificial neural networks are only as large as the nervous systems of insects.
Because the size of neural networks is of paramount importance, deep learning requires high performance hardware and software infrastructure.

12.1.1 Fast CPU Implementations

Traditionally, neural networks were trained using the CPU of a single machine. Today, this approach is generally considered insufficient. We now mostly use GPU computing or the CPUs of many machines networked together. Before moving to these expensive setups, researchers worked hard to demonstrate that CPUs could not manage the high computational workload required by neural networks.

A description of how to implement efficient numerical CPU code is beyond the scope of this book, but we emphasize here that careful implementation for specific CPU families can yield large improvements. For example, in 2011, the best CPUs available could run neural network workloads faster when using fixed-point arithmetic rather than floating-point arithmetic. By creating a carefully tuned fixed-point implementation, Vanhoucke et al. (2011) obtained a threefold speedup over a strong floating-point system. Each new model of CPU has different performance characteristics, so sometimes floating-point implementations can be faster too. The important principle is that careful specialization of numerical computation routines can yield a large payoff. Other strategies, besides choosing whether to use fixed or floating point, include optimizing data structures to avoid cache misses and using vector instructions. Many machine learning researchers neglect these implementation details, but when the performance of an implementation restricts the size of the model, the accuracy of the model suffers.

12.1.2 GPU Implementations

Most modern neural network implementations are based on graphics processing units. Graphics processing units (GPUs) are specialized hardware components that were originally developed for graphics applications. The consumer market for video gaming systems spurred development of graphics processing hardware. The performance characteristics needed for good video gaming systems turn out to be beneficial for neural networks as well.

Video game rendering requires performing many operations in parallel quickly. Models of characters and environments are specified in terms of lists of 3-D coordinates of vertices. Graphics cards must perform matrix multiplication and division on many vertices in parallel to convert these 3-D coordinates into 2-D on-screen coordinates. The graphics card must then perform many computations at each pixel in parallel to determine the color of each pixel.
In both cases, the computations are fairly simple and do not involve much branching compared to the computational workload that a CPU usually encounters. For example, each vertex in the same rigid object will be multiplied by the same matrix; there is no need to evaluate an if statement per-vertex to determine which matrix to multiply by. The computations are also entirely independent of each other, and thus may be parallelized easily. The computations also involve processing massive buffers of memory, containing bitmaps describing the texture (color pattern) of each object to be rendered. Together, this results in graphics cards having been designed to have a high degree of parallelism and high memory bandwidth, at the cost of having a lower clock speed and less branching capability relative to traditional CPUs.

Neural network algorithms require the same performance characteristics as the real-time graphics algorithms described above. Neural networks usually involve large and numerous buffers of parameters, activation values, and gradient values, each of which must be completely updated during every step of training. These buffers are large enough to fall outside the cache of a traditional desktop computer, so the memory bandwidth of the system often becomes the rate limiting factor. GPUs offer a compelling advantage over CPUs due to their high memory bandwidth. Neural network training algorithms typically do not involve much branching or sophisticated control, so they are appropriate for GPU hardware. Since neural networks can be divided into multiple individual “neurons” that can be processed independently from the other neurons in the same layer, neural networks easily benefit from the parallelism of GPU computing.

GPU hardware was originally so specialized that it could only be used for graphics tasks. Over time, GPU hardware became more flexible, allowing custom subroutines to be used to transform the coordinates of vertices or assign colors to pixels. In principle, there was no requirement that these pixel values actually be based on a rendering task. These GPUs could be used for scientific computing by writing the output of a computation to a buffer of pixel values. Steinkrau et al. (2005) implemented a two-layer fully connected neural network on a GPU and reported a threefold speedup over their CPU-based baseline. Shortly thereafter, Chellapilla et al. (2006) demonstrated that the same technique could be used to accelerate supervised convolutional networks.

The popularity of graphics cards for neural network training exploded after the advent of general purpose GPUs. These GP-GPUs could execute arbitrary code, not just rendering subroutines. NVIDIA’s CUDA programming language provided a way to write this arbitrary code in a C-like language.
With their relatively convenient programming model, massive parallelism, and high memory bandwidth, GP-GPUs now offer an ideal platform for neural network programming. This platform was rapidly adopted by deep learning researchers soon after it became available (Raina et al., 2009; Ciresan et al., 2010).

Writing efficient code for GP-GPUs remains a difficult task best left to specialists. The techniques required to obtain good performance on GPU are very different from those used on CPU. For example, good CPU-based code is usually designed to read information from the cache as much as possible. On GPU, most writable memory locations are not cached, so it can actually be faster to compute the same value twice, rather than compute it once and read it back from memory. GPU code is also inherently multi-threaded and the different threads must be coordinated with each other carefully. For example, memory operations are faster if they can be coalesced. Coalesced reads or writes occur when several threads can each read or write a value that they need simultaneously, as part of a single memory transaction. Different models of GPUs are able to coalesce different kinds of read or write patterns. Typically, memory operations are easier to coalesce if among n threads, thread i accesses byte i + j of memory, and j is a multiple of some power of 2. The exact specifications differ between models of GPU. Another common consideration for GPUs is making sure that each thread in a group executes the same instruction simultaneously. This means that branching can be difficult on GPU. Threads are divided into small groups called warps. Each thread in a warp executes the same instruction during each cycle, so if different threads within the same warp need to execute different code paths, these different code paths must be traversed sequentially rather than in parallel.

Due to the difficulty of writing high performance GPU code, researchers should structure their workflow to avoid needing to write new GPU code in order to test new models or algorithms. Typically, one can do this by building a software library of high performance operations like convolution and matrix multiplication, then specifying models in terms of calls to this library of operations. For example, the machine learning library Pylearn2 (Goodfellow et al., 2013c) specifies all of its machine learning algorithms in terms of calls to Theano (Bergstra et al., 2010; Bastien et al., 2012) and cuda-convnet (Krizhevsky, 2010), which provide these high-performance operations. This factored approach can also ease support for multiple kinds of hardware. For example, the same Theano program can run on either CPU or GPU, without needing to change any of the calls to Theano itself. Other libraries like TensorFlow (Abadi et al., 2015) and Torch (Collobert et al., 2011b) provide similar features.
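The factored approach can be illustrated with a toy sketch. The function names below are invented for illustration and do not correspond to any particular library's API; the point is simply that the model is defined only in terms of a small library of operations, so those operations could be reimplemented for new hardware without touching the model definition.

import numpy as np

# "Library" of high-performance operations; a GPU backend would swap these out.
def matmul(a, b):
    return a @ b

def relu(x):
    return np.maximum(x, 0)

# Model definition, expressed only through calls to the library above.
def forward(x, params):
    h = relu(matmul(x, params["W1"]) + params["b1"])
    return matmul(h, params["W2"]) + params["b2"]

rng = np.random.default_rng(0)
params = {"W1": rng.normal(size=(8, 16)), "b1": np.zeros(16),
          "W2": rng.normal(size=(16, 2)), "b2": np.zeros(2)}
print(forward(rng.normal(size=(4, 8)), params).shape)  # (4, 2)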
12.1.3 Large-Scale Distributed Implementations

In many cases, the computational resources available on a single machine are insufficient. We therefore want to distribute the workload of training and inference across many machines.

Distributing inference is simple, because each input example we want to process can be run by a separate machine. This is known as data parallelism. It is also possible to get model parallelism, where multiple machines work together on a single datapoint, with each machine running a different part of the model. This is feasible for both inference and training.

Data parallelism during training is somewhat harder. We can increase the size of the minibatch used for a single SGD step, but usually we get less than linear returns in terms of optimization performance. It would be better to allow multiple machines to compute multiple gradient descent steps in parallel. Unfortunately, the standard definition of gradient descent is as a completely sequential algorithm: the gradient at step t is a function of the parameters produced by step t − 1.

This can be solved using asynchronous stochastic gradient descent (Bengio et al., 2001; Recht et al., 2011). In this approach, several processor cores share the memory representing the parameters. Each core reads parameters without a lock, then computes a gradient, then increments the parameters without a lock. This reduces the average amount of improvement that each gradient descent step yields, because some of the cores overwrite each other’s progress, but the increased rate of production of steps causes the learning process to be faster overall. Dean et al. (2012) pioneered the multi-machine implementation of this lock-free approach to gradient descent, where the parameters are managed by a parameter server rather than stored in shared memory. Distributed asynchronous gradient descent remains the primary strategy for training large deep networks and is used by most major deep learning groups in industry (Chilimbi et al., 2014; Wu et al., 2015). Academic deep learning researchers typically cannot afford the same scale of distributed learning systems, but some research has focused on how to build distributed networks with relatively low-cost hardware available in the university setting (Coates et al., 2013).
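A highly simplified sketch of the shared-memory, lock-free access pattern is given below. Python threads are used only to illustrate the pattern (read the parameters without a lock, compute a gradient, apply the update without a lock) on an assumed toy linear-regression problem; real systems run on many cores or behind a parameter server.

import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + rng.normal(0, 0.01, size=1000)

w = np.zeros(5)                          # shared parameters, no lock

def worker(n_steps=2000, lr=0.01, batch=32):
    local_rng = np.random.default_rng()
    for _ in range(n_steps):
        idx = local_rng.integers(0, len(X), size=batch)
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch
        np.subtract(w, lr * grad, out=w)  # unsynchronized in-place update

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(np.round(w, 2))                    # approximately recovers true_w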
12.1.4 Model Compression

In many commercial applications, it is much more important that the time and memory cost of running inference in a machine learning model be low than that the time and memory cost of training be low. For applications that do not require personalization, it is possible to train a model once, then deploy it to be used by billions of users. In many cases, the end user is more resource-constrained than the developer. For example, one might train a speech recognition network with a powerful computer cluster, then deploy it on mobile phones.

A key strategy for reducing the cost of inference is model compression (Buciluǎ et al., 2006). The basic idea of model compression is to replace the original, expensive model with a smaller model that requires less memory and runtime to store and evaluate.

Model compression is applicable when the size of the original model is driven primarily by a need to prevent overfitting. In most cases, the model with the lowest generalization error is an ensemble of several independently trained models. Evaluating all n ensemble members is expensive. Sometimes, even a single model generalizes better if it is large (for example, if it is regularized with dropout).

These large models learn some function f(x), but do so using many more parameters than are necessary for the task. Their size is necessary only because of the limited number of training examples. As soon as we have fit this function f(x), we can generate a training set containing infinitely many examples, simply by applying f to randomly sampled points x. We then train the new, smaller model to match f(x) on these points. To make the most efficient use of the capacity of the new, small model, it is best to sample the new x points from a distribution resembling the actual test inputs that will be supplied to the model later. This can be done by corrupting training examples or by drawing points from a generative model trained on the original training set.

Alternatively, one can train the smaller model only on the original training points, but train it to copy other features of the larger model, such as its posterior distribution over the incorrect classes (Hinton et al., 2014, 2015).
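As a concrete illustration, here is a hedged sketch of this recipe under simple assumptions: f_large is the already-trained large model, small_model exposes a generic fit method, and Gaussian corruption of the training inputs stands in for "a distribution resembling the test inputs." None of these names come from the book.

```python
# Sketch of model compression: label inputs with the large model, then fit a
# smaller model to reproduce those labels.  f_large, small_model and X_train
# are hypothetical objects assumed to exist.
import numpy as np

def compress(f_large, small_model, X_train, n_samples=100_000, noise_std=0.1):
    # Sample new inputs from a distribution resembling the test inputs,
    # here by corrupting original training examples with Gaussian noise.
    idx = np.random.randint(len(X_train), size=n_samples)
    X_new = X_train[idx] + noise_std * np.random.randn(*X_train[idx].shape)

    y_new = f_large(X_new)           # the large model f(x) defines the targets
    small_model.fit(X_new, y_new)    # the small model is trained to match f(x)
    return small_model
```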
12.1.5 Dynamic Structure

One strategy for accelerating data processing systems in general is to build systems that have dynamic structure in the graph describing the computation needed to process an input. Data processing systems can dynamically determine which subset of many neural networks should be run on a given input. Individual neural networks can also exhibit dynamic structure internally by determining which subset of features (hidden units) to compute given information from the input. This form of dynamic structure inside neural networks is sometimes called conditional computation (Bengio, 2013; Bengio et al., 2013b). Since many components of the architecture may be relevant only for a small number of possible inputs, the system can run faster by computing these features only when they are needed.

Dynamic structure of computations is a basic computer science principle applied generally throughout the software engineering discipline. The simplest versions of dynamic structure applied to neural networks are based on determining which subset of some group of neural networks (or other machine learning models) should be applied to a particular input.

A venerable strategy for accelerating inference in a classifier is to use a cascade of classifiers. The cascade strategy may be applied when the goal is to detect the presence of a rare object (or event). To know for sure that the object is present, we must use a sophisticated classifier with high capacity that is expensive to run. However, because the object is rare, we can usually use much less computation to reject inputs as not containing the object. In these situations, we can train a sequence of classifiers. The first classifiers in the sequence have low capacity and are trained to have high recall. In other words, they are trained to make sure we do not wrongly reject an input when the object is present. The final classifier is trained to have high precision. At test time, we run inference by running the classifiers in sequence, abandoning any example as soon as any one element in the cascade rejects it. Overall, this allows us to verify the presence of objects with high confidence, using a high-capacity model, but does not force us to pay the cost of full inference for every example. There are two different ways that the cascade can achieve high capacity. One way is to make the later members of the cascade individually have high capacity. In this case, the system as a whole obviously has high capacity, because some of its individual members do. It is also possible to make a cascade in which every individual model has low capacity but the system as a whole has high capacity due to the combination of many small models. Viola and Jones (2001) used a cascade of boosted decision trees to implement a fast and robust face detector suitable for use in handheld digital cameras. Their classifier localizes a face using essentially a sliding window approach in which many windows are examined and rejected if they do not contain faces. Another version of cascades uses the earlier models to implement a sort of hard attention mechanism: the early members of the cascade localize an object, and later members of the cascade perform further processing given the location of the object. For example, Google transcribes address numbers from Street View imagery using a two-step cascade that first locates the address number with one machine learning model and then transcribes it with another (Goodfellow et al., 2014d).
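The control flow of cascade inference is simple enough to sketch. The snippet below is a minimal illustration (not from the book); the classifier interface — a reject method on the cheap stages and a predict method on the final stage — is an assumption made for the example.

```python
# Sketch of cascade inference: cheap, high-recall stages reject most inputs
# early; only inputs that survive every stage pay for the expensive,
# high-precision final classifier.  The stage interface is hypothetical.
def cascade_predict(cheap_stages, final_stage, x):
    for stage in cheap_stages:
        if stage.reject(x):          # low-capacity test tuned for high recall
            return False             # rejected early; no further computation
    return final_stage.predict(x)    # high-capacity, high-precision decision
```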
Decision trees themselves are an example of dynamic structure, because each node in the tree determines which of its subtrees should be evaluated for each input.

A simple way to accomplish the union of deep learning and dynamic structure is to train a decision tree in which each node uses a neural network to make the splitting decision (Guo and Gelfand, 1992), though this has typically not been done with the primary goal of accelerating inference computations.

In the same spirit, one can use a neural network, called the gater, to select which one out of several expert networks will be used to compute the output, given the current input. The first version of this idea is called the mixture of experts (Nowlan, 1990; Jacobs et al., 1991), in which the gater outputs a set of probabilities or weights (obtained via a softmax nonlinearity), one per expert, and the final output is obtained by the weighted combination of the outputs of the experts. In that case, the use of the gater does not offer a reduction in computational cost, but if a single expert is chosen by the gater for each example, we obtain the hard mixture of experts (Collobert et al., 2001, 2002), which can considerably accelerate training and inference time. This strategy works well when the number of gating decisions is small, because it is not combinatorial. But when we want to select different subsets of units or parameters, it is not possible to use a "soft switch," because doing so requires enumerating (and computing outputs for) all the gater configurations. To deal with this problem, several approaches have been explored to train combinatorial gaters. Bengio et al. (2013b) experiment with several estimators of the gradient on the gating probabilities, while Bacon et al. (2015) and Bengio et al. (2015a) use reinforcement learning techniques (policy gradient) to learn a form of conditional dropout on blocks of hidden units and obtain an actual reduction in computational cost without negatively affecting the quality of the approximation.
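The gating idea is compact enough to sketch. The code below is a hedged illustration (not the book's); it assumes each expert and the gater are callables that take the input x, with the gater returning one unnormalized score per expert.

```python
# Sketch of a mixture of experts with a softmax gater.  With hard=True, only
# the single selected expert is evaluated, which is where the computational
# savings come from.  gater and experts are hypothetical callables.
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / np.sum(e)

def mixture_of_experts(gater, experts, x, hard=False):
    weights = softmax(gater(x))                        # one weight per expert
    if hard:
        k = int(np.argmax(weights))
        return experts[k](x)                           # evaluate only the chosen expert
    return sum(w * expert(x) for w, expert in zip(weights, experts))
```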
Another kind of dynamic structure is a switch, where a hidden unit can receive input from different units depending on the context. This dynamic routing approach can be interpreted as an attention mechanism (Olshausen et al., 1993). So far, the use of a hard switch has not proven effective on large-scale applications. Contemporary approaches instead use a weighted average over many possible inputs and thus do not achieve all of the possible computational benefits of dynamic structure. Contemporary attention mechanisms are described in section 12.4.5.1.

One major obstacle to using dynamically structured systems is the decreased degree of parallelism that results from the system following different code branches for different inputs. This means that few operations in the network can be described as matrix multiplication or batch convolution on a minibatch of examples. We can write more specialized subroutines that convolve each example with different kernels or multiply each row of a design matrix by a different set of columns of weights. Unfortunately, these more specialized subroutines are difficult to implement efficiently. CPU implementations will be slow due to the lack of cache coherence, and GPU implementations will be slow due to the lack of coalesced memory transactions and the need to serialize warps when members of a warp take different branches.

In some cases, these issues can be mitigated by partitioning the examples into groups that all take the same branch and processing these groups of examples simultaneously. This can be an acceptable strategy for minimizing the time required to process a fixed amount of examples in an offline setting. In a real-time setting where examples must be processed continuously, partitioning the workload can result in load-balancing issues. For example, if we assign one machine to process the first step in a cascade and another machine to process the last step in a cascade, then the first will tend to be overloaded and the last will tend to be underloaded. Similar issues arise if each machine is assigned to implement different nodes of a neural decision tree.

12.1.6 Specialized Hardware Implementations of Deep Networks

Since the early days of neural networks research, hardware designers have worked on specialized hardware implementations that could speed up training and/or inference of neural network algorithms. See early and more recent reviews of specialized hardware for deep networks (Lindsey and Lindblad, 1994; Beiu et al., 2003; Misra and Saha, 2010).

Different forms of specialized hardware (Graf and Jackel, 1989; Mead and Ismail, 2012; Kim et al., 2009; Pham et al., 2012; Chen et al., 2014a,b) have been developed over the last decades with ASICs (application-specific integrated circuits), either digital (based on binary representations of numbers), analog (Graf and Jackel, 1989; Mead and Ismail, 2012) (based on physical implementations of continuous values as voltages or currents), or hybrid (combining digital and analog components). In recent years, more flexible FPGA (field programmable gate array) implementations (where the particulars of the circuit can be written on the chip after it has been built) have been developed.

Though software implementations on general-purpose processing units (CPUs and GPUs) typically use 32 or 64 bits of precision to represent floating-point numbers, it has long been known that it is possible to use less precision, at least at inference time (Holt and Baker, 1991; Holi and Hwang, 1993; Presley and Haggard, 1994; Simard and Graf, 1994; Wawrzynek et al., 1996; Savich et al., 2007). This has become a more pressing issue in recent years as deep learning has gained in popularity in industrial products, and as the great impact of faster hardware was demonstrated with GPUs.
Another factor that motivates current research on specialized hardware for deep networks is that the rate of progress of a single CPU or GPU core has slowed down, and most recent improvements in computing speed have come from parallelization across cores (either in CPUs or GPUs). This is very different from the situation of the 1990s (the previous neural network era), when hardware implementations of neural networks (which might take two years from inception to availability of a chip) could not keep up with the rapid progress and low prices of general-purpose CPUs. Building specialized hardware is thus a way to push the envelope further, at a time when new hardware designs are being developed for low-power devices such as phones, aiming for general-public applications of deep learning (e.g., with speech, computer vision or natural language).

Recent work on low-precision implementations of backprop-based neural nets (Vanhoucke et al., 2011; Courbariaux et al., 2015; Gupta et al., 2015) suggests that between 8 and 16 bits of precision can suffice for using or training deep neural networks with back-propagation. What is clear is that more precision is required during training than at inference time, and that some forms of dynamic fixed-point representation of numbers can be used to reduce how many bits are required per number. Traditional fixed-point numbers are restricted to a fixed range (which corresponds to a given exponent in a floating-point representation). Dynamic fixed-point representations share that range among a set of numbers (such as all the weights in one layer). Using fixed-point rather than floating-point representations and using fewer bits per number reduces the hardware surface area, power requirements and computing time needed for performing multiplications, and multiplications are the most demanding of the operations needed to use or train a modern deep network with backprop.
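As a rough illustration of the dynamic fixed-point idea, the sketch below quantizes all the weights of one layer with a single shared scale and a small number of bits; it is a simplification for intuition only, not a description of any particular hardware scheme.

```python
# Sketch of dynamic fixed-point quantization: every weight in a layer is stored
# as a small integer, and the whole layer shares one scale (its "exponent").
import numpy as np

def quantize_layer(weights, n_bits=8):
    max_abs = max(np.max(np.abs(weights)), 1e-8)   # guard against an all-zero layer
    scale = max_abs / (2 ** (n_bits - 1) - 1)      # shared range for the layer
    q = np.round(weights / scale).astype(np.int32)
    return q, scale                                # integers plus a single scale factor

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_layer(w, n_bits=8)
print(np.max(np.abs(w - dequantize(q, s))))        # quantization error stays small
```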
12.2 Computer Vision

Computer vision has traditionally been one of the most active research areas for deep learning applications, because vision is a task that is effortless for humans and many animals but challenging for computers (Ballard et al., 1983). Many of the most popular standard benchmark tasks for deep learning algorithms are forms of object recognition or optical character recognition.

Computer vision is a very broad field encompassing a wide variety of ways of processing images and an amazing diversity of applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating entirely new categories of visual abilities. As an example of the latter category, one recent computer vision application is to recognize sound waves from the vibrations they induce in objects visible in a video (Davis et al., 2014). Most deep learning research on computer vision has not focused on such exotic applications that expand the realm of what is possible with imagery, but rather on a small core of AI goals aimed at replicating human abilities. Most deep learning for computer vision is used for object recognition or detection of some form, whether this means reporting which object is present in an image, annotating an image with bounding boxes around each object, transcribing a sequence of symbols from an image, or labeling each pixel in an image with the identity of the object it belongs to. Because generative modeling has been a guiding principle of deep learning research, there is also a large body of work on image synthesis using deep models. While image synthesis ex nihilo is usually not considered a computer vision endeavor, models capable of image synthesis are usually useful for image restoration, a computer vision task involving repairing defects in images or removing objects from images.

12.2.1 Preprocessing

Many application areas require sophisticated preprocessing because the original input comes in a form that is difficult for many deep learning architectures to represent. Computer vision usually requires relatively little of this kind of preprocessing. The images should be standardized so that their pixels all lie in the same reasonable range, like [0, 1] or [−1, 1]. Mixing images that lie in [0, 1] with images that lie in [0, 255] will usually result in failure. Formatting images to have the same scale is the only kind of preprocessing that is strictly necessary. Many computer vision architectures require images of a standard size, so images must be cropped or scaled to fit that size. Even this rescaling is not always strictly necessary. Some convolutional models accept variably sized inputs and dynamically adjust the size of their pooling regions to keep the output size constant (Waibel et al., 1989). Other convolutional models have variable-sized output that automatically scales in size with the input, such as models that denoise or label each pixel in an image (Hadsell et al., 2007).

Dataset augmentation may be seen as a way of preprocessing the training set only. Dataset augmentation is an excellent way to reduce the generalization error of most computer vision models. A related idea applicable at test time is to show the model many different versions of the same input (for example, the same image cropped at slightly different locations) and have the different instantiations of the model vote to determine the output. This latter idea can be interpreted as an ensemble approach and helps to reduce generalization error.
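A hedged sketch of this test-time idea follows; it assumes a trained model that returns class probabilities, uses a simple hypothetical crop helper, and averages the predictions over several crops of the same image.

```python
# Sketch of test-time augmentation by voting over crops of one image.
# model is assumed to return a vector of class probabilities; crop is a
# hypothetical helper that extracts a fixed-size window at a given offset.
import numpy as np

def crop(image, top, left, size):
    return image[top:top + size, left:left + size]

def predict_with_crops(model, image, size, offsets):
    probs = [model(crop(image, top, left, size)) for top, left in offsets]
    return np.mean(probs, axis=0)        # average (vote) over the crop ensemble

def corner_and_center_offsets(h, w, size):
    # Example offsets: four corners plus the center of the image.
    return [(0, 0), (0, w - size), (h - size, 0), (h - size, w - size),
            ((h - size) // 2, (w - size) // 2)]
```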
Other kinds of preprocessing are applied to both the train and the test set with the goal of putting each example into a more canonical form in order to reduce the amount of variation that the model needs to account for. Reducing the amount of variation in the data can both reduce generalization error and reduce the size of the model needed to fit the training set. Simpler tasks may be solved by smaller models, and simpler solutions are more likely to generalize well. Preprocessing of this kind is usually designed to remove some kind of variability in the input data that is easy for a human designer to describe and that the human designer is confident has no relevance to the task. When training with large datasets and large models, this kind of preprocessing is often unnecessary, and it is best to just let the model learn which kinds of variability it should become invariant to. For example, the AlexNet system for classifying ImageNet has only one preprocessing step: subtracting the mean across training examples of each pixel (Krizhevsky et al., 2012).

12.2.1.1 Contrast Normalization

One of the most obvious sources of variation that can be safely removed for many tasks is the amount of contrast in the image. Contrast simply refers to the magnitude of the difference between the bright and the dark pixels in an image. There are many ways of quantifying the contrast of an image. In the context of deep learning, contrast usually refers to the standard deviation of the pixels in an image or region of an image. Suppose we have an image represented by a tensor $\mathsf{X} \in \mathbb{R}^{r \times c \times 3}$, with $X_{i,j,1}$ being the red intensity at row $i$ and column $j$, $X_{i,j,2}$ giving the green intensity, and $X_{i,j,3}$ giving the blue intensity. Then the contrast of the entire image is given by

$$\sqrt{\frac{1}{3rc} \sum_{i=1}^{r} \sum_{j=1}^{c} \sum_{k=1}^{3} \left( X_{i,j,k} - \bar{X} \right)^2} \qquad (12.1)$$

where $\bar{X}$ is the mean intensity of the entire image:

$$\bar{X} = \frac{1}{3rc} \sum_{i=1}^{r} \sum_{j=1}^{c} \sum_{k=1}^{3} X_{i,j,k}. \qquad (12.2)$$

Global contrast normalization (GCN) aims to prevent images from having varying amounts of contrast by subtracting the mean from each image, then rescaling it so that the standard deviation across its pixels is equal to some constant $s$. This approach is complicated by the fact that no scaling factor can change the contrast of a zero-contrast image (one whose pixels all have equal intensity). Images with very low but non-zero contrast often have little information content. Dividing by the true standard deviation usually accomplishes nothing more than amplifying sensor noise or compression artifacts in such cases.
This motivates introducing a small, positive regularization parameter $\lambda$ to bias the estimate of the standard deviation. Alternatively, one can constrain the denominator to be at least $\epsilon$. Given an input image $\mathsf{X}$, GCN produces an output image $\mathsf{X}'$, defined such that

$$X'_{i,j,k} = s \, \frac{X_{i,j,k} - \bar{X}}{\max\left\{ \epsilon, \; \sqrt{\lambda + \frac{1}{3rc} \sum_{i'=1}^{r} \sum_{j'=1}^{c} \sum_{k'=1}^{3} \left( X_{i',j',k'} - \bar{X} \right)^2} \right\}}. \qquad (12.3)$$

Datasets consisting of large images cropped to interesting objects are unlikely to contain any images with nearly constant intensity. In these cases, it is safe to practically ignore the small denominator problem by setting $\lambda = 0$ and to avoid division by zero in extremely rare cases by setting $\epsilon$ to an extremely low value like $10^{-8}$. This is the approach used by Goodfellow et al. (2013a) on the CIFAR-10 dataset. Small images cropped randomly are more likely to have nearly constant intensity, making aggressive regularization more useful. Coates et al. (2011) used $\epsilon = 0$ and $\lambda = 10$ on small, randomly selected patches drawn from CIFAR-10. The scale parameter $s$ can usually be set to 1, as done by Coates et al. (2011), or chosen to make each individual pixel have standard deviation across examples close to 1, as done by Goodfellow et al. (2013a).

The standard deviation in equation 12.3 is just a rescaling of the $L^2$ norm of the image (assuming the mean of the image has already been removed). It is preferable to define GCN in terms of standard deviation rather than $L^2$ norm because the standard deviation includes division by the number of pixels, so GCN based on standard deviation allows the same $s$ to be used regardless of image size. However, the observation that the $L^2$ norm is proportional to the standard deviation can help build a useful intuition. One can understand GCN as mapping examples onto a spherical shell. See figure 12.1 for an illustration. This can be a useful property because neural networks are often better at responding to directions in space than to exact locations. Responding to multiple distances in the same direction requires hidden units with collinear weight vectors but different biases. Such coordination can be difficult for the learning algorithm to discover. Additionally, many shallow graphical models have problems with representing multiple separated modes along the same line. GCN avoids these problems by reducing each example to a direction rather than a direction and a distance.
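Equation 12.3 translates directly into a few lines of array code. The sketch below is a minimal illustration with the hyperparameter defaults mentioned above (s = 1, λ = 0, ε = 10⁻⁸) and is not taken from any particular library.

```python
# Sketch of global contrast normalization (equation 12.3) for a single image
# stored as an array of shape (rows, columns, 3).
import numpy as np

def global_contrast_normalize(X, s=1.0, lmbda=0.0, eps=1e-8):
    X = X.astype(np.float64)
    X_mean = X.mean()                          # mean intensity over all pixels and channels
    X_centered = X - X_mean
    contrast = np.sqrt(lmbda + np.mean(X_centered ** 2))
    return s * X_centered / max(contrast, eps) # denominator constrained to be at least eps
```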
[Figure 12.1 shows three scatter plots of example data: raw input, GCN with λ = 0, and GCN with λ = 10⁻².] Figure 12.1: GCN maps examples onto a sphere. (Left) Raw input data may have any norm. (Center) GCN with λ = 0 maps all non-zero examples perfectly onto a sphere. Here we use s = 1 and ε = 10⁻⁸. Because we use GCN based on normalizing the standard deviation rather than the L² norm, the resulting sphere is not the unit sphere. (Right) Regularized GCN, with λ > 0, draws examples toward the sphere but does not completely discard the variation in their norm. We leave s and ε the same as before.

Counterintuitively, there is a preprocessing operation known as sphering, and it is not the same operation as GCN. Sphering does not refer to making the data lie on a spherical shell, but rather to rescaling the principal components to have equal variance, so that the multivariate normal distribution used by PCA has spherical contours. Sphering is more commonly known as whitening.

Global contrast normalization will often fail to highlight image features we would like to stand out, such as edges and corners. If we have a scene with a large dark area and a large bright area (such as a city square with half the image in the shadow of a building), then global contrast normalization will ensure there is a large difference between the brightness of the dark area and the brightness of the light area. It will not, however, ensure that edges within the dark region stand out. This motivates local contrast normalization. Local contrast normalization ensures that the contrast is normalized across each small window, rather than over the image as a whole. See figure 12.2 for a comparison of global and local contrast normalization.

Various definitions of local contrast normalization are possible. In all cases, one modifies each pixel by subtracting a mean of nearby pixels and dividing by a standard deviation of nearby pixels. In some cases, this is literally the mean and standard deviation of all pixels in a rectangular window centered on the pixel to be modified (Pinto et al., 2008). In other cases, this is a weighted mean and weighted standard deviation using Gaussian weights centered on the pixel to be modified. In the case of color images, some strategies process different color channels separately, while others combine information from different channels to normalize each pixel (Sermanet et al., 2012).
[Figure 12.2 shows, for several example photographs, the input image, the result of GCN, and the result of LCN.] Figure 12.2: A comparison of global and local contrast normalization. Visually, the effects of global contrast normalization are subtle. It places all images on roughly the same scale, which reduces the burden on the learning algorithm to handle multiple scales. Local contrast normalization modifies the image much more, discarding all regions of constant intensity. This allows the model to focus on just the edges. Regions of fine texture, such as the houses in the second row, may lose some detail due to the bandwidth of the normalization kernel being too high.

Local contrast normalization can usually be implemented efficiently by using separable convolution (see section 9.8) to compute feature maps of local means and local standard deviations, then using element-wise subtraction and element-wise division on different feature maps. Local contrast normalization is a differentiable operation and can also be used as a nonlinearity applied to the hidden layers of a network, as well as a preprocessing operation applied to the input.

As with global contrast normalization, we typically need to regularize local contrast normalization to avoid division by zero. In fact, because local contrast normalization typically acts on smaller windows, it is even more important to regularize. Smaller windows are more likely to contain values that are all nearly the same as each other, and thus more likely to have zero standard deviation.
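A hedged sketch of one simple variant follows: it uses an unweighted square window (the rectangular-window case mentioned above) on a single-channel image, relies on SciPy's uniform_filter for the local averaging, and includes a small ε in the divisor as just discussed. None of this comes from the book.

```python
# Sketch of local contrast normalization with an unweighted square window:
# each pixel is centered by the local mean and divided by the (regularized)
# local standard deviation.  Single-channel image, illustration only.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_normalize(img, window=9, eps=1e-4):
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=window)
    local_sqmean = uniform_filter(img ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sqmean - local_mean ** 2, 0.0))
    return (img - local_mean) / np.maximum(local_std, eps)   # regularized division
```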
12.2.1.2 Dataset Augmentation

As described in section 7.4, it is easy to improve the generalization of a classifier by increasing the size of the training set with extra copies of the training examples that have been modified with transformations that do not change the class. Object recognition is a classification task that is especially amenable to this form of dataset augmentation because the class is invariant to so many transformations and the input can be easily transformed with many geometric operations. As described before, classifiers can benefit from random translations, rotations, and in some cases, flips of the input to augment the dataset. In specialized computer vision applications, more advanced transformations are commonly used for dataset augmentation. These schemes include random perturbation of the colors in an image (Krizhevsky et al., 2012) and nonlinear geometric distortions of the input (LeCun et al., 1998b).

12.3 Speech Recognition

The task of speech recognition is to map an acoustic signal containing a spoken natural language utterance into the corresponding sequence of words intended by the speaker. Let $\mathbf{X} = (\mathbf{x}^{(1)}, \mathbf{x}^{(2)}, \ldots, \mathbf{x}^{(T)})$ denote the sequence of acoustic input vectors (traditionally produced by splitting the audio into 20 ms frames). Most speech recognition systems preprocess the input using specialized hand-designed features, but some deep learning systems (Jaitly and Hinton, 2011) learn features from raw input. Let $\mathbf{y} = (y_1, y_2, \ldots, y_N)$ denote the target output sequence (usually a sequence of words or characters). The automatic speech recognition (ASR) task consists of creating a function $f^*_{\mathrm{ASR}}$ that computes the most probable linguistic sequence $\mathbf{y}$ given the acoustic sequence $\mathbf{X}$:

$$f^*_{\mathrm{ASR}}(\mathbf{X}) = \arg\max_{\mathbf{y}} P^*(\mathbf{y} \mid \mathbf{X}) \qquad (12.4)$$

where $P^*$ is the true conditional distribution relating the inputs $\mathbf{X}$ to the targets $\mathbf{y}$.

From the 1980s until about 2009–2012, state-of-the-art speech recognition systems primarily combined hidden Markov models (HMMs) and Gaussian mixture models (GMMs). GMMs modeled the association between acoustic features and phonemes (Bahl et al., 1987), while HMMs modeled the sequence of phonemes. The GMM-HMM model family treats acoustic waveforms as being generated by the following process: first an HMM generates a sequence o