What is computer vision?
• The search for fundamental visual features, and
the two fundamental applications of reconstruction
and recognition
– Features: map a 2D image into a vector or a point x
– Recognition
– Reconstruction
2012
• The current mania and euphoria of the AI revolution
– 2012, the annual gathering (the ImageNet challenge), an improvement of about 10 points (from 75 to 85)
• Computer vision researchers use machine learning techniques to recognize
objects in large amounts of images
– Go back to 1998 (1.5 decades!)
• Text (hand-written and printed) is actually visual!
– And why wait so long?
• A silent hardware revolution: the GPU
• Sadly driven by video gaming 
– Nvidia (the GPU maker) is now in the driving seat of this AI revolution!
• 2016, AlphaGo beats professionals
• A narrow AI program
• Re-shaping AI, Computer Science, digital revolution …
Visual matching, and recognition for
understanding
• Finding the visually similar things in different images
--- Visual similarities
• Visual matching, find the ‘same’ thing under different
viewpoints, better defined, no semantics per se.
• Visual recognition, find the pre-trained ‘labels’,
semantics
– We define ‘labels’, then ‘learn’ from labeled data, and finally
classify into these ‘labels’
The state-of-the-art of visual
classification and recognition
• Anything you can clearly define and label
• Then show a few thousand examples (labeled data)
of this thing to the computer
• A computer then recognizes a new image, not seen
before, as well as humans, or even better!
• This is done by deep neural networks.
References
 In the notes.pptx, some advanced topics
 In talks, some vision history
 CNN for Visual Recognition, Stanford http://cs231n.github.io/
 Deep Learning Tutorial, LeNet, Montreal
http://www.deeplearning.net/tutorial/mlp.html
 Deep Learning, Goodfellow, Bengio, and Courville
 Pattern Recognition and Machine Learning, Bishop
 Sparse and Redundant Representations, Elad
 Pattern Recognition and Neural Networks, Ripley
 Pattern Classification, Duda and Hart, various editions
 A Wavelet Tour of Signal Processing: The Sparse Way, Mallat
 Introduction to Applied Mathematics, Strang
Some figures and texts in the slides are cut/paste from these references.
A few basics
[Figure: the hierarchy of spaces, R^n (vector space), A^n (affine), E^n (Euclidean, with the dot product), P^n (projective, with points at infinity)]
Naturally everything starts from the known vector space
• add two vectors
• multiply any vector by any scalar
• zero vector – origin
• finite basis
Vector spaces and geometries
• Vector space to affine: isomorphic, one-to-one (points, lines, parallelism)
• Vector to Euclidean as an enrichment: scalar product (angles, distances, circles)
• Affine to projective as an extension: add ideal elements (points at infinity)
Numerical computation and Linear
algebra
• Gaussian elimination
• LU decomposition
• Cholesky (symmetric positive definite) LL^T
• orthogonal decomposition
• QR (Gram-Schmidt)
• SVD (the high(est)light of linear algebra!)
A = U Σ V^T, with A of size m×n
• row space: first Vs
• null space: last Vs
• col space: first Us
• null space of the transpose: last Us
Applications
• solving equations A x = b, by decomposition, SVD
• fewer equations than unknowns: add constraints, sparse solutions
• more equations than unknowns: the least squares solution, minimizing ||Ax – b||^2, via the eigendecomposition of A^T A or the SVD of A
• solving linear equations, also iteratively nonlinear ones f(x), or optimization
• PCA is SVD
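As a small illustration of the bullets above, here is a minimal NumPy sketch (not from the slides; the variable names and toy data are illustrative) of the least-squares solution of an overdetermined A x = b via the SVD, and of PCA as the SVD of the centered data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined system: more equations than unknowns.
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=100)

# Least squares via the SVD: x = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_ls = Vt.T @ ((U.T @ b) / s)
print(x_ls)  # close to x_true

# PCA is SVD: principal directions are the right singular vectors
# of the centered data matrix.
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated toy data
Xc = X - X.mean(axis=0)
U2, s2, Vt2 = np.linalg.svd(Xc, full_matrices=False)
components = Vt2                      # principal directions (rows)
explained_var = s2 ** 2 / (len(X) - 1)
projected = Xc @ components[:2].T     # project onto the top-2 components
```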
High dimensionality discussion ---
curse
• [0,1]^d
•…
Visual recognition
Image classification and object
recognition
• Viewing an image as a one-dimensional vector of features x, thanks to features! (or simply pixels if you
would like),
• Image classification and recognition becomes a machine learning application f(x).
• What are good features?
• Unsupervised, given examples x_i of a random variable x, implicitly or explicitly we learn the
distributions p(x), or some properties of p(x)
– PCA Principal Components Analysis
– K-means clustering
• Given a set of training images and labels, we predict labels of a test image.
• Supervised, given examples x_i and associated values or labels y_i, we learn to predict y from x,
usually by estimating p(y|x)
– KNN K-Nearest Neighbors
• Distance metric and the (hyper) parameter K
• Slow due to the curse of dimensionality
– From linear classifiers to nonlinear neural networks
Unsupervised
• K-means clustering
– Partition n data points into k clusters
– NP hard
– heuristics
• PCA Principal Components Analysis
– Learns an orthogonal, linear transformation of the data
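A minimal sketch of the k-means heuristic (Lloyd's algorithm) mentioned above; the exact partitioning problem is NP-hard, so the code simply alternates assignments and centroid updates. The toy data and names are illustrative.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's heuristic for k-means: alternate assignments and centroid updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initialization
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Recompute each center as the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# Toy usage: two well-separated blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centers, labels = kmeans(X, k=2)
```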
Supervised
• Given a set of training images and labels, we predict
labels of a test image.
• Supervised learning algorithms
– KNN K-Nearest Neighbors
• Distance metric and the (hyper) parameter K
• Slow due to the curse of dimensionality
– From linear classifiers to nonlinear neural networks
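A minimal sketch of the K-Nearest-Neighbors prediction described above; the full pairwise distance computation is what makes KNN slow as the dimension and data size grow. The toy data and names are illustrative.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Predict labels by majority vote among the k nearest training points (L2 metric)."""
    # Pairwise squared Euclidean distances: slow and memory hungry in high dimensions.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]   # indices of the k closest training points
    votes = y_train[nearest]                  # their labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy usage: two Gaussian blobs with labels 0 and 1.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)
X_test = np.array([[0.5, 0.0], [4.2, 3.8]])
print(knn_predict(X_train, y_train, X_test, k=5))   # [0, 1] expected
```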
Fundamental linear classifiers
Fundamental binary linear
classifiers
 Binary linear classifier
 The classification surface is a hyper-plane
 Decision function could be a nonlinear thresholding d(x_i,wx+b),
 Nonlinear distance function, or probability-like sigmoid
 Geometry, 3d, and n-d
 Linear algebra, linear space
 Linear classifiers
 The linear regression is the simplest case where we have a closed-
form solution
A neuron is a linear classifier
• A single neuron is a linear classifier, where the decision function is the
activation function
• w x + b, a linear classifier, a neuron,
– It’s a dot product of two vectors, scalar product
– A template matching, a correlation, the template w and the input vector x (or a
matched filter)
– Also an algebraic distance, not the geometric one which is non-linear (therefore
the solution is usually nonlinear!)
• The dot product is a distance measure between two points, one the data, the other the representative template
• It is an angular distance, x · w = ||x|| ||w|| cos θ; the true distance follows the cosine
law, d = ||x-w||^2 = ||x||^2 + ||w||^2 - 2 ||x|| ||w|| cos θ = x·x + w·w - 2 x·w
– The ‘linear’ is that the decision surface is linear, a hyper-plane. The decision
function is not linear at all
A biological neuron and its mathematical model.
A very simple old example
• The chroma-key segmentation, we want to remove
the blue or green background in images
• Given an RGB image: alpha R + beta G + gamma B >
threshold, e.g., 0.0 R + 0.0 G + 1.0 B > 100
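A minimal sketch of the chroma-key rule above, read as a linear classifier on RGB pixels with w = (0, 0, 1) and b = -100; the tiny image here is synthetic.

```python
import numpy as np

# Synthetic 4x4 RGB image, uint8: left half bluish background, right half a reddish object.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (10, 20, 220)    # bluish background
img[:, 2:] = (200, 30, 40)    # reddish foreground

# Linear classifier on each pixel: w . [R, G, B] + b > 0 means background.
w = np.array([0.0, 0.0, 1.0])
b = -100.0
scores = img.reshape(-1, 3) @ w + b
background = (scores > 0).reshape(img.shape[:2])

# Remove the background by zeroing the classified pixels.
result = img.copy()
result[background] = 0
print(background)
```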
A linear classifier is not that
straightforward
• Two things: Inference and Learning
• Only inference ‘scoring’ function is linear
– We have only one ‘linear’ classifier, but we do have different ways to define the loss to make the
learning feasible. The learning is a (nonlinear) optimization problem in which we define a loss
function.
• Learning is almost never linear!
– How to compute or to learn this hyperplane, and how to assess which one is better? To define an
objective ‘loss’ function
• The classification is ‘discrete’, binary 0, 1,
• No ‘analytical’ forms of the loss functions, except the true linear regression with y-y_i
• It should be computationally feasible
– The logistic regression (though it is a classification, not a regression), converts the output of the linear
function into a probability
– SVM
Activation (nonlinearity) functions
• ReLU, max(0,x)
• Sigmoid logistic function s(x) = 1/(1+e^{-x}), normalized to between 0
and 1, is naturally probability-like,
– so naturally, sigmoid for two classes,
– (and softmax for N classes, e^{x_i} / sum_j e^{x_j})
– Activation function and nonlinearity function: not
necessarily the logistic sigmoid between 0 and 1; others like
tanh (centered), ReLU, …
• Tanh, 2 s(2x) – 1, centered between -1 and 1, better
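A minimal sketch of the activation functions listed above, with a numerical check of the identity tanh(x) = 2 s(2x) - 1.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes to (0, 1), probability-like

def softmax(x):
    e = np.exp(x - x.max())           # subtract the max for numerical stability
    return e / e.sum()

x = np.linspace(-3, 3, 7)
print(relu(x))
print(sigmoid(x))
# tanh is a rescaled, centered sigmoid: tanh(x) = 2*sigmoid(2x) - 1
print(np.allclose(np.tanh(x), 2 * sigmoid(2 * x) - 1))   # True
```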
The rectified linear unit, ReLU
• The smoother non-linearities used to be favoured in
the past.
• Sigmoid, kills gradients, not used any more
• At present, ReLU is the most popular.
• Easy to implement max(0,x)
• It learns faster with many layers, accelerating the
learning by a factor of 6 (in AlexNet)
• Not smooth at x=0, subderivative is a set for a convex
function, left and right derivatives, set to zero for
simplicity and sparsity
From two to N classes
• Classification f : R^n → {1,2,…,n}, while Regression f:
R^n → R
• Multi-class, output a vector function x′ = g(W x + b), where g is the rectified linear unit.
• W x + b, each row is a linear classifier, a neuron
The two common linear classifiers,
with different loss functions
• SVM, uncalibrated score
– A hinge loss, the max-margin loss, summed over classes j different from the true class y
– Computationally more feasible, leads to convex optimization
• Softmax f(*), the normalized exponentials, y = f(x′) = f(g(W x + b))
– multi-class logistic regression
– The scores are the unnormalized log probabilities
– the negative log-likelihood loss, then gradient descent
– (1,-2,0) -> exp, (2.71,0.14,1) -> normalize, (0.7,0.04,0.26)
– ½(1,-2,0) = (0.5,-1,0) -> exp, (1.65,0.37,1) -> normalize, (0.55,0.12,0.33), more uniform
with increasing regularization
• They are usually comparable
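A small sketch reproducing the numerical example above: exponentiate the scores, then normalize; halving the scores (as stronger regularization would, by shrinking the weights) makes the probabilities more uniform.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())   # stable exponentiation
    return e / e.sum()

scores = np.array([1.0, -2.0, 0.0])
print(softmax(scores))          # ~ [0.70, 0.04, 0.26]

# Halving the scores, as if the weights were shrunk by regularization:
print(softmax(0.5 * scores))    # ~ [0.55, 0.12, 0.33], more uniform

# The softmax (cross-entropy) loss when the true class is class 0:
probs = softmax(scores)
loss = -np.log(probs[0])        # negative log-likelihood of the true class
```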
From linear to non-linear
classifiers, Multi Layer Perceptrons
• Go higher and linear!
• find a map or a transform x ↦ φ(x) to make them linear, but
in higher dimensions
– A complete basis of polynomials → too many parameters for the
limited training data
– Kernel methods, support vector machine, …
• Learn the nonlinearity at the same time as the linear
classifiers → multilayer neural networks
Multi Layer Perceptrons
• The first layer
• The second layer
• One-layer, g^1(x), linear classifiers
• Two-layers, one hidden layer --- universal nonlinear approximator
• The depth is the number of the layers, n+1, for the composition f ∘ g^n ∘ … ∘ g^1
• The dimension of the output vector h is the width of the layer
• An N-layer neural network does not count the input layer x
• But it does count the output layer f(*). It represents the class scores
vector; it does not have an activation function (or has the identity
activation function).
A 2-layer Neural Network, one hidden
layer of 4 neurons (or units), and one
output layer with 2 neurons, and three
inputs.
The network has 4 + 2 = 6 neurons (not
counting the inputs), [3 x 4] + [4 x 2] = 20
weights and 4 + 2 = 6 biases, for a total
of 26 learnable parameters.
A 3-layer neural network with three
inputs, two hidden layers of 4 neurons
each and one output layer. Notice that in
both cases there are connections
(synapses) between neurons across
layers, but not within a layer.
The network has 4 + 4 + 1 = 9 neurons,
[3 x 4] + [4 x 4] + [4 x 1] = 12 + 16 + 4 = 32
weights and 4 + 4 + 1 = 9 biases, for a
total of 41 learnable parameters.
Universal approximator
• Given a function, there exists a feedforward network
that approximates the function
• A single layer is sufficient to represent any function,
• but it may be infeasibly large,
• and it may fail to learn and generalize correctly
• Using deeper models can reduce the number of units
required to represent the function, and can reduce
the amount of generalization error
Functional summary
 The input x, the output y
 The hidden features h
 The hidden layer is a nonlinear transformation h = g(W x + b)
 h provides a set of features describing x
 Or provides a new representation for x
 The nonlinearity is compositional
 Each layer h^(k) = g(W^(k) h^(k-1) + b^(k)), with g(z) = max(0,z).
 The output y = f(h)
MLP is not new, but deep feedforward
neural network is modern!
• Feedforward, from x to y, no feedback
• Networks, modeled as a directed acyclic graph for
the composition of functions connected in a chain,
f(x) = f^(N)( … f^(3)(f^(2)(f^(1)(x))))
• The depth of the network is N
• ‘deep learning’ as N is increasing.
Forward inference f(x) and
backward learning nabla f(x)
• A (parameterized) score function f(x,w) mapping the
data to class score, forward inference, modeling
• A loss function (objective) L measuring the quality of
a particular set of parameters based on the ground
truth labels
• Optimization, minimize the loss over the parameters
with a regularization, backward learning
The dataset of pairs of (x,y) is given and fixed. The weights start out as
random numbers and can change. During the forward pass the score
function computes class scores, stored in vector f. The loss function
contains two components: The data loss computes the compatibility
between the scores f and the labels y. The regularization loss is only a
function of the weights. During Gradient Descent, we compute the
gradient on the weights (and optionally on the data if we wish) and use
it to perform a parameter update.
Cost or loss functions
• Usually the parametric model f(x,theta) defines a distribution p(y | x; theta) and we
use the maximum likelihood, that is the cross-entropy between the training data y and
the model’s predictions f(x,theta) as the loss or the cost function.
– The cross-entropy between a ‘true’ distribution p and an estimated distribution q is H(p,q) = -
sum_x p(x) log q(x)
• The cost J(theta) = - E log p (y|x), if p is normal, we have the mean squared error cost J
= ½ E ||y-f(x; theta)||^2+const
• The cost can also be viewed as a functional, mapping functions to real numbers, we
are learning functions f parameterized by theta. By calculus of variations, f(x) = E_* [y]
• The SVM loss is carefully designed and special, hinge loss, max margin loss,
• The softmax is the cross-entropy between the estimated class probabilities e^{y_i} /
sum_j e^{y_j} and the true class labels, also the negative log likelihood loss L_i = - log
(e^{y_i} / sum_j e^{y_j})
MLE, likelihood and probability discussions
KL, information theory, ….
Gradient-based learning
• Define the loss function
• Gradient-based optimization with chain rules
• z=f(y)=f(g(x)), dz/dx=dz/dy dy/dx
• In vectors, with y = g(x) and z = f(y), the gradient is ∇_x z = J^T ∇_y z, where J is the Jacobian matrix of g
• In tensors, back-propagation
• Analytical gradients are simple
– d max(0,x)/d x = 1 or 0, d sigmoid(x)/d x = (1 – sig) sig
• Use the centered difference (f(x+h)-f(x-h))/(2h), with error of order O(h^2)
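A minimal sketch of the centered-difference gradient check mentioned above, comparing the analytical sigmoid gradient (1 - s) s with the numerical one.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return (1.0 - s) * s              # analytical gradient

def numeric_grad(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)   # centered difference, O(h^2) error

x = np.linspace(-3, 3, 7)
# The maximum discrepancy should be tiny.
print(np.max(np.abs(sigmoid_grad(x) - numeric_grad(sigmoid, x))))
```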
Stochastic gradient descent
• min loss(f(x;theta)), function of parameters theta,
not x
• min f(x)
• Gradient descent or the method of steepest descent
x’ = x - epsilon nabla_x f(x)
• Gradient descent, follows the gradient of an entire
training set downhill
• Stochastic gradient descent, follows the gradient of
randomly selected minibatches downhill
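A minimal sketch contrasting full-batch gradient descent with stochastic gradient descent on randomly sampled minibatches, here for a simple least-squares loss; the toy problem and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=1000)

def grad(w, Xb, yb):
    # Gradient of the mean squared error 0.5 * ||Xb w - yb||^2 / n over a (mini)batch.
    return Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(5)
lr, batch_size = 0.1, 32
for step in range(2000):
    idx = rng.choice(len(X), size=batch_size, replace=False)  # random minibatch
    w -= lr * grad(w, X[idx], y[idx])                         # SGD update
# Full-batch gradient descent would instead use grad(w, X, y) at every step.
print(np.max(np.abs(w - w_true)))   # small
```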
Monte Carlo discussions ….
Auto-differentiation discussions …
Regularization
 loss term + lambda regularization term
 Regularization as constraints for underdetermined system
 Eg. A^T A + alpha I, Solving linear homogeneous Ax = 0, ||Ax||^2 with ||x||^2=1
 Regularization for the stable uniqueness solution of ill-posed problems (no unique solution)
 Regularization as prior for MAP on top of the maximum likelihood estimation, again all
approximations to the full Bayesian inference
 L1/2 regularization
 L1 regularization has ‘feature selection’ property, that is, it produces a sparse vector,
setting many to zeros
 L2 is usually diffuse, producing many small numbers.
 L2, though often superior, does not explicitly select features
Architecture prior and inductive
biases discussions ...
Hyperparameters and validation
• The hyper-parameter lambda
• Split the training data into two disjoint subsets
– One is to learn the parameters
– The other subset, the validation set, is to guide the
selection of the hyperparameters
A linearly non-separable toy example
• The toy spiral data consists of three classes (blue, red, yellow) that are not
linearly separable.
– 300 pts, 3 classes
• Linear classifier fails to learn the toy spiral dataset.
• Neural Network classifier crushes the spiral dataset.
– One hidden layer of width 100
A toy example from cs231n
• The toy spiral data consists of three classes (blue, red, yellow) that are not linearly
separable.
– 3 classes, 100 pts for each class
• Softmax linear classifier fails to learn the toy spiral dataset.
– One layer, W, 2*3, b,
– analytical gradients, 190 iterations, loss from 1.09 to 0.78, 48% training set accuracy
• Neural Network classifier crushes the spiral dataset.
– One hidden layer of width 100: W1 is 2*100, b1; W2 is 100*3, b2; only a few extra lines of Python code!
– 9000 iterations, loss from 1.09 to 0.24, 98% training set accuracy
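A condensed sketch in the spirit of the cs231n example referenced above: a one-hidden-layer network of width 100 with a softmax loss, trained by full-batch gradient descent on spiral data. The hyperparameters here are illustrative, not necessarily the exact course settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spiral data: 3 classes, 100 points each, 2-D inputs.
N, D, K = 100, 2, 3
X = np.zeros((N * K, D))
y = np.zeros(N * K, dtype=int)
for j in range(K):
    ix = range(N * j, N * (j + 1))
    r = np.linspace(0.0, 1.0, N)
    t = np.linspace(j * 4, (j + 1) * 4, N) + rng.normal(scale=0.2, size=N)
    X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
    y[ix] = j

# One hidden layer of width 100, ReLU nonlinearity, softmax output.
h = 100
W1 = 0.01 * rng.normal(size=(D, h)); b1 = np.zeros(h)
W2 = 0.01 * rng.normal(size=(h, K)); b2 = np.zeros(K)
lr, reg = 1.0, 1e-3

for i in range(9000):
    # Forward pass.
    hidden = np.maximum(0, X @ W1 + b1)            # ReLU
    scores = hidden @ W2 + b2
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)   # softmax

    # Loss: mean negative log-likelihood plus L2 regularization.
    loss = -np.log(probs[np.arange(len(X)), y]).mean() \
           + 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

    # Backward pass.
    dscores = probs.copy()
    dscores[np.arange(len(X)), y] -= 1
    dscores /= len(X)
    dW2 = hidden.T @ dscores + reg * W2; db2 = dscores.sum(axis=0)
    dhidden = dscores @ W2.T
    dhidden[hidden <= 0] = 0                       # backprop through ReLU
    dW1 = X.T @ dhidden + reg * W1; db1 = dhidden.sum(axis=0)

    # Parameter update (full-batch gradient descent).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

hidden = np.maximum(0, X @ W1 + b1)
pred = (hidden @ W2 + b2).argmax(axis=1)
print("training accuracy:", (pred == y).mean())    # close to 0.98
```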
Generalization
• The ‘optimization’ reduces the training errors (or residual errors before)
• The ‘machine learning’ wants to reduce the generalization error or the
test error as well.
• The generalization error is the expected value of the error on a new input,
from the test set.
– Make the training error small.
– Make the gap between training and test error small.
• Underfitting: the model is not able to obtain a sufficiently low error on the training set
• Overfitting: the model is not able to narrow the gap between the training and the
test error.
The training data was generated synthetically, by randomly sampling x values and choosing y deterministically
by evaluating a quadratic function. (Left) A linear function fit to the data suffers from
underfitting. (Center) A quadratic function fit to the data generalizes well to unseen points. It does not suffer from
a significant amount of overfitting or underfitting. (Right) A polynomial of degree 9 fit to
the data suffers from overfitting. Here we used the Moore-Penrose pseudoinverse to solve
the underdetermined normal equations. The solution passes through all of the training
points exactly, but we have not been lucky enough for it to extract the correct structure.
The capacity of a model
• The old Occam’s razor. Among competing hypotheses, we should choose the
“simplest” one.
• The modern VC Vapnik-Chervonenkis dimension. The largest possible
value of m for which there exists a training set of m different x points that
the binary classifier can label arbitrarily.
• The no-free-lunch theorem. No best learning algorithm, no best form of
regularization. Task specific.
Practice: learning rate
Left: A cartoon depicting the effects of different learning rates. With low learning rates the
improvements will be linear. With high learning rates they will start to look more exponential.
Higher learning rates will decay the loss faster, but they get stuck at worse values of loss
(green line). This is because there is too much "energy" in the optimization and the
parameters are bouncing around chaotically, unable to settle in a nice spot in the optimization
landscape. Right: An example of a typical loss function over time, while training a small
network on CIFAR-10 dataset. This loss function looks reasonable (it might indicate a slightly
too small learning rate based on its speed of decay, but it's hard to say), and also indicates
that the batch size might be a little too low (since the cost is a little too noisy).
Practice: avoid overfitting
The gap between the training and validation accuracy
indicates the amount of overfitting. Two possible cases
are shown in the diagram on the left. The blue validation
error curve shows very small validation accuracy
compared to the training accuracy, indicating strong
overfitting. When you see this in practice you probably
want to increase regularization or collect more data. The
other possible case is when the validation accuracy tracks
the training accuracy fairly well. This case indicates that
your model capacity is not high enough: make the model
larger by increasing the number of parameters.
Why deeper?
• Deeper networks are able to use far fewer units per layer
and far fewer parameters, as well as frequently
generalizing to the test set
• But harder to optimize!
• Choosing a deep model encodes a very general belief that
the function we want to learn involves composition of
several simpler functions. Or the learning consists of
discovering a set of underlying factors of variation that can
in turn be described in terms of other, simpler underlying
factors of variation.
• The core idea in deep learning is that we assume that
the data was generated by the composition of factors or
features, potentially at multiple levels in a hierarchy.
• This assumption allows an exponential gain in the
relationship between the number of examples and
the number of regions that can be distinguished.
• The exponential advantages conferred by the use of
deep, distributed representations counter the
exponential challenges posed by the curse of
dimensionality.
Curse of dimensionality
Convolutional Neural Networks, or CNN,
a visual network, and back to a 2D lattice from the abstraction of the 1D feature vector x in neural networks
From a regular network to CNN
• We have regarded an input image as a vector of features, x, by virtue of
feature detection and selection, like other machine learning applications to
learn f(x)
• We now regard it as is: a 2D image, a 2D grid, a topological discrete lattice,
I(i,j)
• Converting input images into a feature vector loses the spatial neighborhood-
ness
• The complexity increases cubically
• Yet, the connectivities become local to reduce the complexity!
What is a convolution?
• The fundamental operation, the convolution (I * K)(i,j) = Σ_m Σ_n I(m,n) K(i-m, j-n)
• Flipping the kernel makes the convolution commutative, which is
fundamental in theory, but not required in NN to compose
with other functions, so discrete correlations are also included
under “convolution”
• Convolution is a linear operator, dot-product like correlation,
not a matrix multiplication, but can be implemented as a
sparse matrix multiplication, to be viewed as an affine
transform
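A minimal sketch of the operation as used in CNNs, a cross-correlation over a single channel with 'valid' extent; flipping the kernel first (flip=True) gives the convolution in the strict, commutative sense.

```python
import numpy as np

def conv2d(image, kernel, flip=False):
    """Valid 2-D convolution of a single-channel image with a small kernel.
    flip=False gives the cross-correlation that most CNN libraries call 'convolution'."""
    K = kernel[::-1, ::-1] if flip else kernel
    H, W = image.shape
    kh, kw = K.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * K)   # dot-product-like
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
edge = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge filter
print(conv2d(img, edge))
```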
A CNN arranges its neurons in three dimensions (width, height, depth).
Every layer of a CNN transforms the 3D input volume to a 3D output
volume. In this example, the red input layer holds the image, so its
width and height would be the dimensions of the image, and the depth
would be 3 (Red, Green, Blue channels).
A regular 3-layer Neural Network.
LeNet: a layered model composed of convolution and
subsampling operations followed by a holistic
representation and ultimately a classifier for
handwritten digits. [ LeNet ]
Convolutional Neural Networks: 1998. Input 32*32. CPU
AlexNet: a layered model composed of convolution,
subsampling, and further operations followed by a
holistic representation and all-in-all a landmark
classifier on
ILSVRC12. [ AlexNet ]
+ data
+ gpu
+ non-saturating nonlinearity
+ regularization
Convolutional Neural Networks: 2012. Input 224*224*3. GPU.
LeNet vs AlexNet
LeNet (1998):
• 32*32*1 input
• 7 layers
• 2 conv and 4 classification layers
• 60 thousand parameters
• Only two complete convolutional layers
– Conv, nonlinearities, and pooling as one complete
layer
• About 17 k parameters
• (from the 1989 LeNet-1 of about 10 k parameters to
the 1998 LeNet)
AlexNet (2012):
• 224*224*3 input
• 8 layers
• 5 conv and 3 fully connected classification layers
• 5 convolutional layers, with layers 3, 4, 5 stacked on top of
each other
• Three complete conv layers
• 60 million parameters, insufficient data
• Data augmentation:
– Patches (224 from 256 input), translations, reflections
– PCA, to simulate changes in intensity and colors
The motivation of convolutions
• Sparse interaction, or Local connectivity.
– The receptive field of the neuron, or the filter size.
– The connections are local in space (width and height), but
always full in depth
– A set of learnable filters
• Parameter sharing, the weights are tied
• Equivariant representation, translation equivariance
Convolution and matrix
multiplication
• Discrete convolution can be viewed as multiplication
by matrix
• The kernel is a doubly block circulant matrix
• It is very sparse!
The ‘convolution’ operation
• The convolution is commutative because we have flipped the kernel
– Many implement a cross-correlation without flipping
• A convolution can be defined for 1, 2, 3, and N D
– The 2D convolution is different from a real 3D convolution, which integrates the spatio-
temporal information; the standard CNN convolution has only ‘spatial’ spreading
• In CNN, even for 3-channel RGB input images, the standard convolution is 2D in each
channel,
– each channel has a different filter or kernel; the convolution per channel is then
summed over all channels to produce a scalar for the non-linearity activation
– The filter in each channel is not normalized, so there is no need for different linear
combination coefficients.
– A 1*1 convolution is a dot product across channels, a linear combination of the different
channels
• The backward pass of a convolution is also a convolution with spatially flipped
filters.
The convolution layers
• Stacking several small convolution layers is different from
cascading convolutions
– As each small convolution is followed by the nonlinearity ReLU
– The nonlinearities make the features more expressive!
– Fewer parameters with small filters, but more memory.
– Cascading simply enlarges the spatial extent, the receptive field
• Is each conv layer also followed by a pooling?
– LeNet does not!
– AlexNet, at first, did not.
The Pooling Layer
• Reduce the spatial size
• Reduce the amount of parameters
• Avoid over-fitting
• Backpropagation for a max: only routing the gradient to
the input that had the highest value in the forward pass
• It is unclear whether the pooling is essential.
Pooling layer down-samples the volume spatially, independently in each
depth slice of the input volume.
Left: the input volume of size [224x224x64] is pooled with filter size 2,
stride 2 into output volume of size [112x112x64]. Notice that the volume
depth is preserved.
Right: The most common down-sampling operation is max, giving rise
to max pooling, here shown with a stride of 2. That is, each max is taken
over 4 numbers (little 2x2 square).
The spatial hyperparameters
• Depth
• Stride
• Zero-padding
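A small sketch of the standard output-size arithmetic for these hyperparameters, assuming the usual formula (W - F + 2P)/S + 1 for input size W, filter size F, stride S, and zero-padding P.

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv (or pooling) layer along one dimension:
    W = input size, F = filter size, S = stride, P = zero-padding."""
    assert (W - F + 2 * P) % S == 0, "filter does not tile the input evenly"
    return (W - F + 2 * P) // S + 1

print(conv_output_size(32, F=5, S=1, P=2))   # 32: 'same' padding preserves the size
print(conv_output_size(7, F=3, S=2, P=1))    # 4
print(conv_output_size(224, F=2, S=2))       # 112: a 2x2, stride-2 pooling halves the size
```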
AlexNet 2012
The convolution and pooling act as
an infinitely strong prior!
• A strong prior has very low entropy, e.g. a Gaussian
with low variance
• An infinitely strong prior says that some parameters
are forbidden, and places zero probability on them
• The convolution ‘prior’ says the identical and zero
weights
• The pooling forces the invariance of small translations
The neuroscientific basis for CNN
• The primary visual cortex, V1, about which we know the most
• The brain region LGN, the lateral geniculate nucleus, at the back of the head, carries the
signal from the eye to V1; a convolutional layer captures three aspects of V1
– It has a 2-dimensional structure
– V1 contains many simple cells, linear units
– V1 has many complex cells, corresponding to features with shift invariance, similar to pooling
• When viewing an object, info flows from the retina, through LGN, to V1, then onward to
V2, then V4, then IT, inferotemporal cortex, corresponding to the last layer of CNN
features
• Not modeled at all: the mammalian vision system develops an attention mechanism
– The human eye is mostly very low resolution, except for a tiny patch, the fovea.
– The human brain makes several eye movements (saccades) to salient parts of the scene
– Human vision perceives 3D
• A simple cell responds to a specific spatial frequency of brightness in a specific
direction at a specific location --- Gabor-like functions
Receptive field
Left: An example input volume in red (e.g. a 32x32x3 CIFAR-10 image), and an example
volume of neurons in the first Convolutional layer. Each neuron in the convolutional layer is
connected only to a local region in the input volume spatially, but to the full depth (i.e. all
color channels). Note, there are multiple neurons (5 in this example) along the depth, all
looking at the same region in the input - see discussion of depth columns in text below.
Right: The neurons from the Neural Network chapter remain unchanged: They still
compute a dot product of their weights with the input followed by a non-linearity, but their
connectivity is now restricted to be local spatially.
Receptive field
Gabor functions
Gabor-like learned features
CNN architectures and algorithms
CNN architectures
• The conventional linear structure, linear list of layers, feedforward
• Generally a DAG, directed acyclic graph
• ResNet simply adds back
• Different terminology: complex layer and simple layer
– A complex (complete) convolutional layer, including different stages such as
convolution per se, batch normalization, nonlinearity, and pooling
– Each stage is a layer, even if there are no parameters
• The traditional CNNs are just a few complex convolutional layers to extract
features, then are followed by a softmax classification output layer
• Convolutional networks can output a high-dimensional, structured object,
rather than just predicting a class label for a classification task or a real
value for a regression task; the output is a tensor
– S_i,j,k is the probability that pixel (j,k) belongs to class i
The popular CNN
• LeNet, 1998
• AlexNet, 2012
• VGGNet, 2014
• ResNet, 2015
VGGNet
• 16 layers
• Only 3*3
convolutions
• 138 million
parameters
ResNet
• 152 layers
• ResNet50
Computational complexity
• The memory bottleneck
• GPU, a few GB
Stochastic Gradient Descent
• Gradient descent, follows the gradient of an
entire training set downhill
• Stochastic gradient descent, follows the
gradient of randomly selected minbatches
downhill
The dropout regularization
• Randomly shut down a subset of units in training
• It is a sparse representation
• It is a different net each time, but all nets share the parameters
– A net with n units can be seen as a collection of 2^n possible thinned nets,
all of which share weights.
– At test time, it is a single net with averaging
• Avoid over-fitting
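A minimal sketch of (inverted) dropout as described above: at training time a random subset of units is shut down and the survivors are rescaled by 1/p, so the single averaged net at test time needs no extra scaling. The keep probability is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_forward(h, p=0.5, train=True):
    """Inverted dropout on activations h with keep-probability p."""
    if not train:
        return h                           # test time: the single averaged net, no scaling
    mask = (rng.random(h.shape) < p) / p   # drop units, rescale the survivors by 1/p
    return h * mask

h = rng.normal(size=(4, 8))
h_train = dropout_forward(h, p=0.5, train=True)   # a different thinned net each call
h_test = dropout_forward(h, train=False)
```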
Ensemble methods
• Bagging (bootstrap aggregating), model averaging,
ensemble methods
– Model averaging outperforms, with increased computation
and memory
– Model averaging is discouraged when benchmarking for
publications
• Boosting, an ensemble method, constructs an
ensemble with higher capacity than the individual
models
Batch normalization
• After convolution, before nonlinearity
• ‘Batch’ as it is done for a subset of data
• Instead of forcing a 0-1 (zero mean, unit variance) input, the
distribution will be learnt, and can even be undone if necessary
• The data normalization or PCA/whitening is common in
general NN, but in CNN, the ‘normalization layer’ has
been shown to be minimal in some nets as well.
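A minimal sketch of a batch-normalization forward pass, assuming the usual formulation: normalize each feature over the minibatch, then apply a learned scale gamma and shift beta (which can undo the normalization if needed); running statistics for test time are omitted.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Normalize per feature over the batch, then scale and shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta             # learned scale/shift can undo it if needed

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(32, 4))   # a minibatch of pre-activations
out = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~1
```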
Open questions
• Networks are non-convex
– Need regularization
• Smaller networks are hard to train with local
methods
– Local minima are bad, in loss, not stable, large variance
• Bigger ones are easier
– More local minima, but better, more stable, small variance
• As big as the computational power, and data!
Local minima, saddle points, and
plateaus?
• We don’t care about finding the exact minimum, we only
want to obtain good generalization error by reducing the
function value.
• In low dimensions, local minima are common.
• In higher dimensions, local minima are rare, and saddle
points are more common.
– The Hessian at a local minimum has all positive eigenvalues. The
Hessian at a saddle point has a mixture of positive and negative eigenvalues.
– It is exponentially unlikely that all n eigenvalues will have the same sign in high
n-dim
CNN applications
CNN applications
• Transfer learning
• Fine-tuning the CNN
– Keep some early layers
• Early layers contain more generic features, edges, color blobs
• Common to many visual tasks
– Fine-tune the later layers
• More specific to the details of the class
• CNN as feature extractor
– Remove the last fully connected layer
– A kind of descriptor or CNN codes for the image
– AlexNet gives a 4096-dimensional descriptor
CNN classification/recognition nets
• CNN layers and fully-connected classification layers
• From ResNet to DenseNet
– Densely connected
– Feature concatenation
Fully convolutional nets: semantic
segmentation
• Classification/recognition nets produce ‘non-spatial’
outputs
– the last fully connected layer has the fixed dimension of
classes, throws away spatial coordinates
• Fully convolutional nets output maps as well
Semantic segmentation
Using sliding windows for semantic
segmentation
Fully convolutional
Fully convolutional
Detection and segmentation nets:
The Mask Region-based CNN (R-CNN):
• Class-independent region (bounding box) proposals
– From selective search to region proposal net with objectness
• Use a CNN to classify each region
• Regression on the bounding box or contour segmentation
• Mask R-CNN: end-to-end
– Use CNN to make proposals on object/non-object in parallel
• The good old idea of face detection by Viola-Jones
– Proposal generation
– Cascading (AdaBoost)
Using sliding windows for object
detection as classification
Mask R-CNN
Excellent results
End.
Some old notes in 2017 and 2018
Fundamentally from continuous to discrete
views … from geometry to recognition
• ‘Simple’ neighborhood from topology
• discrete high order
• Modeling is higher-order and discrete, yet it is solved with
first-order differentiable optimization
• Modeling and implementation become easier
• The multiscale local jet, a hierarchical and local
characterization of the image in a full scale-space
neighborhood
Local jets? Why?
• The multiscale local jet, a hierarchical and local
characterization of the image in a full scale-space
neighborhood
• A feature in CNN is one element of the descriptor
Classification vs regression
• Regression predicts a value from a continuous set, a
real number/continuous,
– Given a set of data, find the relationship (often, the
continuous mathematical functions) that represent the set of
data
– To predict the output value using training data
• Whereas classification predicts the ‘belonging’ to a
class, a discrete/categorical variable
– Given a set of data and classes, identify the class that the
data belongs to
– To group the output into a class
Autoencoder and decoder
• Compression, and reconstruction
• Convolutional autoencoder, trained to reconstruct its
input
– Wide and thin (RGB) to narrow and thick
• ….
Convolution and deconvolution
• The convolution is not invertible, so there is no strict
definition, or closed form, of the so-called
‘deconvolution’
• In an iterative procedure, a kind of convolution
transpose is applied, hence the name ‘deconvolution’
• The ‘deconvolution’ filters are reconstruction bases
CNN as a natural features and
descriptors
• Each point is an interest point or feature point with
high-dimensional descriptors
• Some of them are naturally ‘unimportant’, and
weighted down by the nets
• The spatial information is kept through the nets
• The hierarchical description is natural from local to
global for each point, each pixel, and each feature
point
Traditional stereo vs deep stereo
regression
CNN regression nets: deep
regression stereo
• Regularize the cost volume.
Traditional stereo
• Input image H * W * C
• (The matching) cost volume in disparities D, or in depths
• D * H * W
• The value d_i for each D is the matching cost, or the correlation
score, or the score accumulated in scale space, for the disparity i,
or the depth i.
• Disparities are a function of H and W, d = c(x,y;x+d,y+d)
• Argmin D
• H * W
End-to-end deep stereo regression
• Input image H * W * C
• 18 CNN
• H * W * F
• (F, features, are descriptor vectors for each pixel, we may just correlate or dot-product two descriptor vectors
f1 and f2 to produce a score in D*H*W. But F could be further redefined in successive convolution layers.)
• Cost volume, for each disparity level
• D * H * W * 2F
• 4D volume, viewed as a descriptor vector 2F for each voxel D*H*W
• 3D convolution on H, W, and D
• 19-37 3D CNN
• The last one (deconvolution) turns F into a scalar as a score
• D * H * W
• Soft argmin D
• H * W
Bayesian decision
 posterior = likelihood * prior / evidence
 Decide ω1 if P(ω1|x) > P(ω2|x); otherwise decide ω2
Statistical measurements
 Precision
 Recall
 coverage
Reinforcement learning (RL)
• Dynamic programming uses a mathematical model of the
environment, which is typically formulated as a Markov
Decision Process
• The main difference between the classical dynamic
programming and RL is that RL does not assume an exact
mathematical model of the Markov Decision Process, and
targets large MDPs where exact methods become infeasible
• Therefore, RL is also called neuro-dynamic programming. In
operations research and control, it’s called approximate
dynamic programming
Non-linear iterative optimisation
• J d = r from vector F(x+d)=F(x)+J d
• minimize the square of y-F(x+d)=y-F(x)-J d = r – J d
• normal equation is J^T J d = J^T r (Gauss-Newton)
• (H+lambda I) d = J^T r (LM)
Note: F is a vector of functions, i.e. min f=(y-F)^T(y-F)
General non-linear optimisation
• 1st-order: gradient descent, d = -g (i.e., H = I)
• 2nd-order:
• Newton step: H d = -g
• Gauss-Newton for LS: f=(y-F)^T(y-F), H=J^TJ, g=-J^T r
• ‘restricted step’, trust-region, LM: (H+lambda W) d = -g
R. Fletcher: Practical Methods of Optimization
f(x+d) = f(x)+g^T d + ½ d^T H d
Note: f is a scalar valued function here.
statistics
• ‘small errors’ --- classical estimation theory
• analytical, based on first-order approximation
• Monte Carlo
• ‘big errors’ --- robust stat.
• RANSAC
• LMS
• M-estimators
Talk about it later
An abstract view
 The input x
 The classification with linear models, y = f(x′)
 Can be an SVM
 The output layer of the linear softmax classifier
 find a map or a transform x ↦ φ(x) to make them linear, but in higher
dimensions
 φ(x) provides a set of features describing x
 Or provides a new representation for x
 The nonlinear transformation φ
 Can be hand-designed kernels in an SVM
 It is the hidden layer of a feedforward network
 Deep learning
 To learn φ
 φ is compositional, multiple layers
 CNN is convolutional for each layer
• Add drawing for ‘receptive fields’
• Dot product for vectors, convolution, more specific for 2D? Time
invariant, or translation invariant, equivariant
• Convolution, then nonlinear activation, also called ‘detection stage’,
detector
• Pooling is for efficiency, down-sampling, and handling inputs of
variable sizes,
To do
Cost or loss functions - bis
• Classification and regression are different! for different application communities
• We used more traditional regression, but the modern emphasis is more on classification, resulting in
different loss considerations
• Regression is harder to optimize, while classification is easier, and more stable
• Classification is easier than regression, so always discretize and quantize the output, and convert
them into classification tasks!
• One big improvement in modern NN development is that the cross-entropy dominates the mean
squared error L2; the mean squared error was popular and good for regression, but not that
good for NN, because cross-entropy rests on a fundamentally more appropriate distribution
assumption, not normal distributions
• L2 for regression is harder to optimize than the more stable softmax for classification
• L2 is also less robust
Automatic differentiation (algorithmic diff),
and backprop, its role in the development
• Differentiation: symbolic or numerical (finite differences)
• Automatic differentiation is to compute derivatives algorithmically,
backprop is only one approach to it.
• Its history is related to that of NN and deep learning
• Worked for traditional small systems, a f(x)
• Larger and more explicit compositional nature of f1(f2(f3(… fn(x)))) goes
back to the very nature of the derivatives
– Forward mode and reverse mode (it is based on f(a+b epsilon) = f(a)+f’(a)b
epsilon, f(g(a+b epsilon)) = f(g(a))+f’(g(a))g’(a) b epsilon)
– The reverse mode is backprop for NN
• In the end, it benefits as well the classical large optimization such as
bundle adjustment
Viewing the composition of an arbitrary
function as a natural layering
• Take f(x,y) = (x + s(y)) / (s(x) + (x+y)^2), with s the sigmoid, at a given point x=3, y=-4
• Forward pass
• f1=s(y), f2 = x+f1, f3=s(x), f4=x+y, f5=f4^2, f6=f3+f5, f7=1/f6, f8=f2*f7
• So f(*)=f8(f7(f6(f5(f4(f3(f2(f1(*)))))))), each of fn is a known elementary function or
operation
• Backprop to get (df/dx, df/dy), or abbreviated as (dx,dy), at (3,-4)
• f8=f, abbreviate df/df7 or df8/df7 as df7, df/df7=f2,…, and df/dx as dx, …
• df7=f2, (df2=f7), df6 = (-1/f6^2) * df7, df5=df6, (df3=df6), df4=(2*f4)*df5, dx=df4, dy=df4, dx
+= (1-s(x))*s(x)*df3 (backprop in s(x)=f3), dx += df2 (backprop in f2), df1 = df2 (backprop in
f2), dy += (1-s(y))*s(y)*df1 (backprop in s(y)=f1)
• In NN, there are just more variables in each layer, but the elementary functions are much
simpler: add, multiply, and max.
• Even the primitive function in each layer takes also the simplest one! Then just a lot of
them!
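The same worked example written out as a short script, so the staged forward and backward passes above can be checked against centered differences.

```python
import math

def s(x):                      # sigmoid
    return 1.0 / (1.0 + math.exp(-x))

def f(x, y):                   # f(x,y) = (x + s(y)) / (s(x) + (x+y)^2)
    return (x + s(y)) / (s(x) + (x + y) ** 2)

x, y = 3.0, -4.0

# Forward pass, staged as f1..f8.
f1 = s(y); f2 = x + f1
f3 = s(x); f4 = x + y; f5 = f4 ** 2; f6 = f3 + f5
f7 = 1.0 / f6; f8 = f2 * f7

# Backward pass: df/d(each stage), accumulated into dx, dy.
df7 = f2; df2 = f7
df6 = (-1.0 / f6 ** 2) * df7
df5 = df6; df3 = df6
df4 = (2 * f4) * df5
dx = df4; dy = df4                      # backprop through f4 = x + y
dx += (1 - f3) * f3 * df3               # backprop through f3 = s(x)
dx += df2; df1 = df2                    # backprop through f2 = x + f1
dy += (1 - f1) * f1 * df1               # backprop through f1 = s(y)

# Numerical check with centered differences.
h = 1e-6
dx_num = (f(x + h, y) - f(x - h, y)) / (2 * h)
dy_num = (f(x, y + h) - f(x, y - h)) / (2 * h)
print(dx, dx_num)   # should agree to several decimals
print(dy, dy_num)
```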
Computational graph
• NN is described with a relatively informal graph
language
• More precise computational graph to describe the
backprop algorithms
Structured probabilistic models,
Graphical models, and factor graph
• A structured probabilistic model is a way of
describing a probability distribution.
• Structured probabilistic models are referred to as
graphical models
• Factor graphs are another way of drawing undirected
models that resolve an ambiguity in the graphical
representation of standard undirected model syntax.
More Related Content

PPTX
cnn.pptx
PPTX
cnn.pptx Convolutional neural network used for image classication
PPTX
Java and Deep Learning
PDF
Chapter8 LINEAR DESCRIMINANT FOR MACHINE LEARNING.pdf
PPTX
Java and Deep Learning (Introduction)
PDF
CS229_MachineLearning_notes.pdfkkkkkkkkkk
PDF
machine learning notes by Andrew Ng and Tengyu Ma
PPTX
Unit-1 Introduction and Mathematical Preliminaries.pptx
cnn.pptx
cnn.pptx Convolutional neural network used for image classication
Java and Deep Learning
Chapter8 LINEAR DESCRIMINANT FOR MACHINE LEARNING.pdf
Java and Deep Learning (Introduction)
CS229_MachineLearning_notes.pdfkkkkkkkkkk
machine learning notes by Andrew Ng and Tengyu Ma
Unit-1 Introduction and Mathematical Preliminaries.pptx

Similar to Introduction to Convolutional Neural Network.pptx (20)

PPTX
Deep Learning Module 2A Training MLP.pptx
PPT
SVM (2).ppt
PPTX
super vector machines algorithms using deep
PPT
Pattern Recognition- Basic Lecture Notes
PPT
Pattern Recognition and understanding patterns
PPT
PatternRecognition_fundamental_engineering.ppt
PPT
PPT
slides
PPT
slides
PPT
i i believe is is enviromntbelieve is is enviromnt7.ppt
PPTX
Machine learning introduction lecture notes
PPTX
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
PPTX
Image Recognition of recognition pattern.pptx
PPT
PPT-3.ppt
PPT
MAchine learning
PPT
Machine Learning Machine Learnin Machine Learningg
PPTX
The world of loss function
PDF
Perceptrons (D1L2 2017 UPC Deep Learning for Computer Vision)
PPTX
How Machine Learning Helps Organizations to Work More Efficiently?
Deep Learning Module 2A Training MLP.pptx
SVM (2).ppt
super vector machines algorithms using deep
Pattern Recognition- Basic Lecture Notes
Pattern Recognition and understanding patterns
PatternRecognition_fundamental_engineering.ppt
slides
slides
i i believe is is enviromntbelieve is is enviromnt7.ppt
Machine learning introduction lecture notes
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
Image Recognition of recognition pattern.pptx
PPT-3.ppt
MAchine learning
Machine Learning Machine Learnin Machine Learningg
The world of loss function
Perceptrons (D1L2 2017 UPC Deep Learning for Computer Vision)
How Machine Learning Helps Organizations to Work More Efficiently?
Ad

Recently uploaded (20)

PDF
Structs to JSON How Go Powers REST APIs.pdf
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PDF
Arduino robotics embedded978-1-4302-3184-4.pdf
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
Lecture Notes Electrical Wiring System Components
PDF
Digital Logic Computer Design lecture notes
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPT
Mechanical Engineering MATERIALS Selection
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPTX
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
PPTX
bas. eng. economics group 4 presentation 1.pptx
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
Structs to JSON How Go Powers REST APIs.pdf
Embodied AI: Ushering in the Next Era of Intelligent Systems
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
Arduino robotics embedded978-1-4302-3184-4.pdf
Foundation to blockchain - A guide to Blockchain Tech
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
Lecture Notes Electrical Wiring System Components
Digital Logic Computer Design lecture notes
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
Mechanical Engineering MATERIALS Selection
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
bas. eng. economics group 4 presentation 1.pptx
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
UNIT-1 - COAL BASED THERMAL POWER PLANTS
Ad

Introduction to Convolutional Neural Network.pptx

  • 1. What is computer vision? • The search of the fundamental visual features, and the two fundamentals applications of reconstruction and recognition – Features, maps a 2D image into a vector or a point x – Recognition – Reconstruction
  • 2. 2012 • The current mania and euphoria of the AI revolution – 2012, annual gathering, an improvement of 10% (from 75 to 85) • Computer vision researchers use machine learning techniques to recognize objects in large amount of images – Go back to 1998 (1.5 decade!) • Textual (hand-written and printed) is actually visual! – And why wait for so long? • A silent Hardware revolution: GPU • Sadly driven by video gaming  – Nvidia (GPU maker) is now in the driving seat of this AI revolution! • 2016, AlphaGo beats professionals • A narrow AI program • Re-shaping AI, Computer Science, digital revolution …
  • 3. Visual matching, and recognition for understanding • Finding the visually similar things in different images --- Visual similarities • Visual matching, find the ‘same’ thing under different viewpoints, better defined, no semantics per se. • Visual recognition, find the pre-trained ‘labels’, semantics – We define ‘labels’, then ‘learn’ from labeled data, finally classify ‘labels’
  • 4. The state-of-the-art of visual classification and recognition • Any thing you can clearly define and label • Then show a few thousands examples (labeled data) of this thing to the computer • A computer recognizes a new image, not seen before, now as good as humans, even better! • This is done by deep neural networks.
  • 5. References  In the notes.pptx, some advanced topics  In talks, some vision history  CNN for Visual Recognition, Stanford http://cs231n.github.io/  Deep Learning Tutorial, LeNet, Montreal http://www.deeplearning.net/tutorial/mlp.html  Deep Learning, Goodfellow, Bengio, and Courville  Pattern Recognition and Machine Learning, Bishop  Sparse and Redundant Representations, Elad  Pattern Recognition and Neural Networks, Ripley  Pattern Classification, Duda, Hart, different editions  A wavelet tour of signal processing, a sparse way, Mallat  Introduction to applied mathematics, Strang Some figures and texts in the slides are cut/paste from these references.
  • 7. 7 prod. dot with n n A E  inf. at pts n n  R P n A n R    Naturally everything starts from the known vector space • add two vectors • multiply any vector by any scalar • zero vector – origin • finite basis Vector spaces and geometries
  • 8. 8 • Vector space to affine: isomorph, one-to-one • vector to Euclidean as an enrichment: scalar prod. • affine to projective as an extension: add ideal elements Pts, lines, parallelism Angle, distances, circles Pts at infinity
  • 9. 9 Numerical computation and Linear algebra • Gaussian elimination • LU decomposition • Choleski (sym positive) LL^T • orthogonal decomposition • QR (Gram-Schmidt) • SVD (the high(est)light of linear algebra!) T V U A    n m • row space: first Vs • null space: last Vs • col space: first Us • null space of the trans : last Us
  • 10. 10 Applications • solving equations A x = b, decomposition, SVD • less, constraints, sparse solutions • more, least squares solution, ||Ax – b||^2 is eigendecomposition or S • solving linear equation, also iteratively nonlinear one f(x), or optimi • PCA is SVD
  • 11. 11 High dimensionality discussion --- curse • [0,1]^d •…
  • 13. Image classification and object recognition • Viewing an image as a one dimensional vector of features x, thanks to features! (or simply pixels if you would like), • Image classification and recognition becomes a machine learning application f(x). • What are good features? • Unsupervised, given examples x_i of a random variable x, implicitly or explicitly we learn the distributions p(x), or some properties of p(x) – PCA Principal Components Analysis – K-means clustering • Given a set of training images and labels, we predict labels of a test image. • Supervised, given examples x_i and associated values or labels y_i, we learn to predict a y from a x, usually by estimating p(y|x) – KNN K-Nearest Neighbors • Distance metric and the (hyper) parameter K • Slow by curse of dimension – From linear classifiers to nonlinear neural networks
  • 14. Unsupervised • K-means clustering – Partition n data points into k clusters – NP hard – heuristics • PCA Principal Components Analysis – Learns an orthogonal, linear transformation of the data
  • 16. Supervised • Given a set of training images and labels, we predict labels of a test image. • Supervised learning algorithms – KNN K-Nearest Neighbors • Distance metric and the (hyper) parameter K • Slow by curse of dimension – From linear classifiers to nonlinear neural networks
  • 19. Fundamental binary linear classifiers  Binary linear classifier  The classification surface is a hyper-plane  Decision function could be a nonlinear thresholding d(x_i,wx+b),  Nonlinear distance function, or probability-like sigmoid  Geometry, 3d, and n-d  Linear algebra, linear space  Linear classifiers  The linear regression is the simplest case where we have a close form solution
  • 20. A neuron is a linear classifier • A single neuron is a linear classifier, where the decision function is the activation function • w x + b, a linear classifier, a neuron, – It’s a dot product of two vectors, scalar product – A template matching, a correlation, the template w and the input vector x (or a matched filter) – Also an algebraic distance, not the geometric one which is non-linear (therefore the solution is usually nonlinear!) • The dot product is the metric distance of two points, one data, the other representative • its angular distance, x . w = ||x|| || w|| cos theta, true distance d = ||x-w||^2 = cosine law = ||x||^2 + ||w||^2- 2 ||x|| ||w|| x . x = x .x + w.w – 2 x.x w.w x . w – The ‘linear’ is that the decision surface is linear, a hyper-plane. The decision function is not linear at all
  • 21. A biological neuron and its mathematical model.
  • 22. A very simple old example • The chroma-key segmentation, we want to remove the blue or green background in images • Given an image RGB, alpha R + betta G + gamma B > threshold or 0.0 R + 0.0 G + 1.0 B > 100
  • 23. A linear classifier is not that straightforward • Two things: Inference and Learning • Only inference ‘scoring’ function is linear – We have only one ‘linear’ classifer, but we do have different ways to define the loss to make the learning feasible. The learning is a (nonlinear) optimization problem in which we define a loss function. • Learning is almost never linear! – How to compute or to learn this hyperplane, and how to assess which one is better? To define an objective ‘loss’ function • The classification is ‘discrete’, binary 0, 1, • No ‘analytical’ forms of the loss functions, except the true linear regression with y-y_i • It should be computationally feasible – The logistic regression (though it is a classification, not a regression), converts the output of the linear function into a probability – SVM
  • 24. Activation (nonlinearity) functions • ReLU, max(0,x) • Sigmoid logistic function ), normalized to between 0 and 1, is naturallly probability-like between 0 and 1, – so naturally, sigmoid for two, – (and softmax for N, ) – Activation function and Nonlinearity function, not necessarily logistic sigmoid between 0 and 1, others like tanh (centered), relu, … • Tanh, 2 s(2x) – 1, centered between -1 and 1, better,
  • 26. The rectified linear unit, ReLU • The smoother non-linearities used to be favoured in the past. • Sigmoid, kills gradients, not used any more • At present, ReLU is the most popular. • Easy to implement max(0,x) • It learns faster with many layers, accelerates the learning by a factor of 6 (in Alex) • Not smooth at x=0, subderivative is a set for a convex function, left and right derivatives, set to zero for simplicity and sparsity
  • 27. From two to N classes • Classification f : R^n  (1,2,…,n), while Regression f: R^n  R • Multi-class, output a vector function = ( + 𝒙′ 𝑔 𝑾 𝒙 ), where g is the rectified linear. 𝒃 • W x + b, each row is a linear classifier, a neuron
  • 28. The two common linear classifiers, with different loss functions • SVM, uncalibrated score – A hinge loss, the max-margin loss, A loss for j is different from y – Computationally more feasible, leads to convex optimization • Softmax f(*), the normalized exponentials, = ( )= ( ( + )) 𝒚 𝑓 𝒙′ 𝑓 𝑔 𝑾 𝒙 𝒃 – multi-class logistic regression – The scores are the unnormalized log probabilities – the negative log-likelihood loss, then gradient descent – (1,-2,0) -> exp, (2.71,0.14,1) -> ln, (0.7,0.04,0.26) – ½(1,-2,0) = (0.5,-1,0) -> exp, (1.65,0.37,1) ->ln, (0.55,0.12,0.33), more uniform with increasing regularization • They are usually comparable
  • 29. From linear to non-linear classifiers, Multi Layer Perceptrons • Go higher and linear! • find a map or a transform ( ) to make them linear, but 𝒙↦𝜙 𝒙 in higher dimensions – A complete basis of polynomials  too many parameters for the limited training data – Kernel methods, support vector machine, … • Learn the nonlinearity at the same time as the linear classifiers  multilayer neural networks
  • 30. Multi Layer Perceptrons • The first layer • The second layer • One-layer, g^1(x), linear classifiers • Two-layers, one hidden layer --- universal nonlinear approximator • The depth is the number of the layers, n+1, f g^n … g^1 • The dimension of the output vector h is the width of the layer • A N-layer neural network does not count the input layer x • But it does count the output layer f(*). It represents the class scores vector, it does not have an activation function, or the identify activation function.
  • 31. A 2-layer Neural Network, one hidden layer of 4 neurons (or units), and one output layer with 2 neurons, and three inputs. The network has 4 + 2 = 6 neurons (not counting the inputs), [3 x 4] + [4 x 2] = 20 weights and 4 + 2 = 6 biases, for a total of 26 learnable parameters.
  • 32. A 3-layer neural network with three inputs, two hidden layers of 4 neurons each and one output layer. Notice that in both cases there are connections (synapses) between neurons across layers, but not within a layer. The network has 4 + 4 + 1 = 9 neurons, [3 x 4] + [4 x 4] + [4 x 1] = 12 + 16 + 4 = 32 weights and 4 + 4 + 1 = 9 biases, for a total of 41 learnable parameters.
  • 33. Universal approximator • Given a function, there exisits a feedforward network that approximates the function • A single layer is sufficient to represent any function, • but it may be infeasibly large, • and it may fail to learn and generalize correctly • Using deeper models can reduce the number of units required to represent the function, and can reduce the amount of generalization error
  • 34. Functional summary  The input , the output y  The hidden features  The hidden layers is a nonlinear transformation  provides a set of features describing  Or provides a new representation for  The nonlinear is compositional  Each layer ), g(z)=max(0,z).  The output
  • 35. MLP is not new, but the deep feedforward neural network is modern! • Feedforward, from x to y, no feedback • Networks, modeled as a directed acyclic graph for the composition of functions connected in a chain, f(x) = f^(N)( … f^(3)(f^(2)(f^(1)(x)))) • The depth of the network is N • 'Deep learning' as N keeps increasing
  • 36. Forward inference f(x) and backward learning via gradients • A (parameterized) score function f(x, w) mapping the data to class scores: forward inference, modeling • A loss function (objective) L measuring the quality of a particular set of parameters based on the ground-truth labels • Optimization: minimize the loss over the parameters, with a regularization term: backward learning
  • 37. The dataset of pairs (x, y) is given and fixed. The weights start out as random numbers and can change. During the forward pass the score function computes class scores, stored in the vector f. The loss function contains two components: the data loss computes the compatibility between the scores f and the labels y, and the regularization loss is a function of the weights only. During gradient descent, we compute the gradient on the weights (and optionally on the data if we wish) and use it to perform a parameter update.
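A skeletal training step following this picture, written for a linear softmax classifier with L2 regularization as an illustrative assumption: forward scores, data loss plus regularization loss, gradient on the weights, parameter update.

```python
import numpy as np

def train_step(W, b, X, y, lam=1e-3, lr=1.0):
    """One gradient-descent step. X: [N, D] data, y: [N] integer labels,
    W: [D, C] weights, b: [C] biases."""
    N = X.shape[0]
    scores = X @ W + b                                    # forward pass: class scores f
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    data_loss = -np.log(probs[np.arange(N), y]).mean()    # compatibility of f with labels y
    reg_loss = 0.5 * lam * np.sum(W * W)                  # a function of the weights only
    dscores = probs
    dscores[np.arange(N), y] -= 1.0
    dscores /= N
    dW = X.T @ dscores + lam * W                          # gradient on the weights
    db = dscores.sum(axis=0)
    return W - lr * dW, b - lr * db, data_loss + reg_loss
```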
  • 38. Cost or loss functions • Usually the parametric model f(x, θ) defines a distribution p(y | x; θ) and we use maximum likelihood, that is, the cross-entropy between the training data y and the model's predictions f(x, θ), as the loss or cost function. – The cross-entropy between a 'true' distribution p and an estimated distribution q is H(p, q) = − Σ_x p(x) log q(x) • The cost J(θ) = − E log p(y | x); if p is normal, we recover the mean squared error cost J = ½ E ||y − f(x; θ)||² + const • The cost can also be viewed as a functional, mapping functions to real numbers; we are learning functions f parameterized by θ. By the calculus of variations, the minimizer of the squared error is f(x) = E_{y∼p(y|x)}[y] • The SVM loss is carefully designed and special: hinge loss, max-margin loss • The softmax loss is the cross-entropy between the estimated class probabilities e^{f_{y_i}} / Σ_j e^{f_j} and the true class labels, also the negative log-likelihood loss L_i = − log(e^{f_{y_i}} / Σ_j e^{f_j})
  • 39. MLE, likelihood and probability discussions, KL divergence, information theory, …
  • 40. Gradient-based learning • Define the loss function • Gradient-based optimization with the chain rule • z = f(y) = f(g(x)), dz/dx = dz/dy · dy/dx • In vectors, ∇_x z = J^T ∇_y z, where J is the Jacobian matrix of g • In tensors, back-propagation • Analytical gradients are simple – d max(0, x)/dx = 1 or 0, d sigmoid(x)/dx = (1 − sig) · sig • Use the centered difference (f(x+h) − f(x−h)) / (2h), error of order O(h²)
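A sketch of a centered-difference gradient check against an analytical gradient; the test function and step size h are illustrative assumptions.

```python
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    # Centered difference (f(x+h) - f(x-h)) / (2h), error O(h^2).
    grad = np.zeros_like(x)
    for i in range(x.size):
        old = x.flat[i]
        x.flat[i] = old + h; fp = f(x)
        x.flat[i] = old - h; fm = f(x)
        x.flat[i] = old
        grad.flat[i] = (fp - fm) / (2.0 * h)
    return grad

# Check the analytical gradient of a simple function, f(x) = sum(max(0, x)^2).
f = lambda x: np.sum(np.maximum(0.0, x) ** 2)
x = np.array([1.5, -0.3, 2.0])
analytic = 2.0 * np.maximum(0.0, x) * (x > 0)   # uses d max(0,x)/dx = 1 or 0
print(numerical_gradient(f, x), analytic)        # the two should agree closely
```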
  • 41. Stochastic gradient descent • min loss(f(x; θ)), a function of the parameters θ, not of x • min f(x) • Gradient descent, or the method of steepest descent: x' = x − ε ∇_x f(x) • Gradient descent follows the gradient of the entire training set downhill • Stochastic gradient descent follows the gradient of randomly selected minibatches downhill
  • 44. Regularization  Loss term + λ · regularization term  Regularization as constraints for an underdetermined system  E.g. A^T A + αI; solving the linear homogeneous system Ax = 0 as min ||Ax||² with ||x||² = 1  Regularization for a stable, unique solution of ill-posed problems (which otherwise have no unique solution)  Regularization as a prior for MAP on top of maximum likelihood estimation; again, all are approximations to full Bayesian inference  L1/L2 regularization  L1 regularization has a 'feature selection' property: it produces a sparse vector, setting many entries to zero  L2 is usually diffuse, producing many small numbers  L2 often works as well or better, but does not explicitly select features
  • 45. Architecture prior and inductive biases discussions ...
  • 46. Hyperparameters and validation • The hyper-parameter lambda • Split the training data into two disjoint subsets – One is to learn the parameters – The other subset, the validation set, is to guide the selection of the hyperparameters
  • 47. A linearly non-separable toy example • The toy spiral data consists of three classes (blue, red, yellow) that are not linearly separable. – 300 pts, 3 classes • A linear classifier fails to learn the toy spiral dataset. • A neural network classifier crushes the spiral dataset. – One hidden layer of width 100
  • 48. A toy example from cs231n • The toy spiral data consists of three classes (blue, red, yellow) that are not linearly separable. – 3 classes, 100 pts per class • The softmax linear classifier fails to learn the toy spiral dataset. – One layer, W: 2×3, b – analytical gradients, 190 iterations, loss from 1.09 to 0.78, 48% training-set accuracy • The neural network classifier crushes the spiral dataset. – One hidden layer of width 100, W1: 2×100, b1, W2: 100×3, only a few extra lines of Python code! – 9000 iterations, loss from 1.09 to 0.24, 98% training-set accuracy
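A condensed sketch of the two-layer net on the spiral data, along the lines of the cs231n example; the data generator, learning rate and regularization strength are assumptions made for illustration, and training accuracy should approach the ~98% quoted on the slide.

```python
import numpy as np

def make_spiral(N=100, K=3, seed=0):
    # Toy spiral data: K classes, N 2-D points per class.
    rng = np.random.default_rng(seed)
    X = np.zeros((N * K, 2)); y = np.zeros(N * K, dtype=int)
    for k in range(K):
        ix = range(N * k, N * (k + 1))
        r = np.linspace(0.0, 1.0, N)
        t = np.linspace(k * 4, (k + 1) * 4, N) + 0.2 * rng.standard_normal(N)
        X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]; y[ix] = k
    return X, y

X, y = make_spiral()
h, lam, lr = 100, 1e-3, 1.0                  # hidden width 100, as on the slide
W1 = 0.01 * np.random.randn(2, h); b1 = np.zeros(h)
W2 = 0.01 * np.random.randn(h, 3); b2 = np.zeros(3)
for it in range(9000):
    H = np.maximum(0, X @ W1 + b1)           # hidden ReLU layer
    scores = H @ W2 + b2
    e = np.exp(scores - scores.max(1, keepdims=True)); p = e / e.sum(1, keepdims=True)
    dscores = p; dscores[np.arange(len(y)), y] -= 1; dscores /= len(y)
    dW2 = H.T @ dscores + lam * W2; db2 = dscores.sum(0)
    dH = dscores @ W2.T; dH[H <= 0] = 0      # backprop through the ReLU
    dW1 = X.T @ dH + lam * W1; db1 = dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
pred = np.argmax(np.maximum(0, X @ W1 + b1) @ W2 + b2, axis=1)
print((pred == y).mean())                    # training accuracy, close to 0.98
```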
  • 49. Generalization • The ‘optimization’ reduces the training errors (or residual errors before) • The ‘machine learning’ wants to reduce the generalization error or the test error as well. • The generalization error is the expected value of the error on a new input, from the test set. – Make the training error small. – Make the gap between training and test error small. • Underfitting is not able to have sufficiently low error on the training set • Overfitting is not able to narrow the gap between the training and the test error.
  • 50. The training data was generated synthetically, by randomly sampling x values and choosing y deterministically by evaluating a quadratic function. (Left) A linear function fit to the data suffers from underfitting. (Center) A quadratic function fit to the data generalizes well to unseen points. It does not suffer from a significant amount of overfitting or underfitting. (Right) A polynomial of degree 9 fit to the data suffers from overfitting. Here we used the Moore-Penrose pseudoinverse to solve the underdetermined normal equations. The solution passes through all of the training points exactly, but we have not been lucky enough for it to extract the correct structure.
  • 51. The capacity of a model • The old Occam's razor: among competing hypotheses, we should choose the "simplest" one. • The modern VC (Vapnik–Chervonenkis) dimension: the largest possible value of m for which there exists a training set of m different points x that the binary classifier can label arbitrarily. • The no-free-lunch theorem: no best learning algorithm, no best form of regularization. It is task specific.
  • 52. Practice: learning rate Left: A cartoon depicting the effects of different learning rates. With low learning rates the improvements will be linear. With high learning rates they will start to look more exponential. Higher learning rates will decay the loss faster, but they get stuck at worse values of loss (green line). This is because there is too much "energy" in the optimization and the parameters are bouncing around chaotically, unable to settle in a nice spot in the optimization landscape. Right: An example of a typical loss function over time, while training a small network on CIFAR-10 dataset. This loss function looks reasonable (it might indicate a slightly too small learning rate based on its speed of decay, but it's hard to say), and also indicates that the batch size might be a little too low (since the cost is a little too noisy).
  • 53. Practice: avoid overfitting The gap between the training and validation accuracy indicates the amount of overfitting. Two possible cases are shown in the diagram on the left. The blue validation error curve shows very small validation accuracy compared to the training accuracy, indicating strong overfitting. When you see this in practice you probably want to increase regularization or collect more data. The other possible case is when the validation accuracy tracks the training accuracy fairly well. This case indicates that your model capacity is not high enough: make the model larger by increasing the number of parameters.
  • 54. Why deeper? • Deeper networks are able to use far fewer units per layer and far fewer parameters, and frequently generalize better to the test set • But they are harder to optimize! • Choosing a deep model encodes a very general belief that the function we want to learn involves a composition of several simpler functions. Or: the learning consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation.
  • 55. Curse of dimensionality • The core idea in deep learning is that we assume the data was generated by the composition of factors or features, potentially at multiple levels in a hierarchy. • This assumption allows an exponential gain in the relationship between the number of examples and the number of regions that can be distinguished. • The exponential advantages conferred by the use of deep, distributed representations counter the exponential challenges posed by the curse of dimensionality.
  • 56. Convolutional Neural Networks, or CNN, a visual network, and back to a 2D lattice from the abstraction of the 1D feature vector x in neural networks
  • 57. From a regular network to CNN • We have regarded an input image as a vector of features x, by virtue of feature detection and selection, like other machine learning applications, to learn f(x) • We now regard it as it is: a 2D image, a 2D grid, a topological discrete lattice I(i, j) • Converting input images into feature vectors loses the spatial neighborhood structure • Otherwise the complexity increases cubically • Yet the connectivities become local, to reduce the complexity!
  • 58. What is a convolution? • The fundamental operation, the convolution: (I * K)(i, j) = Σ_m Σ_n I(m, n) K(i − m, j − n) • Flipping the kernel makes the convolution commutative, which is fundamental in theory but not required in neural networks, where it is composed with other functions; so discrete correlations are also included under "convolution" • Convolution is a linear operator, dot-product-like correlation, not a matrix multiplication per se, but it can be implemented as a sparse matrix multiplication and viewed as an affine transform
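A naive single-channel sketch of the operation above; the `flip` switch distinguishes the textbook convolution from the cross-correlation that CNN implementations usually compute and still call convolution. The 'valid' output size is an illustrative choice.

```python
import numpy as np

def conv2d(I, K, flip=True):
    """Naive 'valid' 2D convolution. With flip=True this is the textbook
    (commutative) convolution; with flip=False it is the cross-correlation."""
    if flip:
        K = K[::-1, ::-1]
    kh, kw = K.shape
    H, W = I.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot-product-like: linear in the image patch.
            out[i, j] = np.sum(I[i:i+kh, j:j+kw] * K)
    return out

I = np.arange(25, dtype=float).reshape(5, 5)
K = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d(I, K, flip=False))   # cross-correlation, as used in CNNs
```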
  • 59. A CNN arranges its neurons in three dimensions (width, height, depth). Every layer of a CNN transforms the 3D input volume to a 3D output volume. In this example, the red input layer holds the image, so its width and height would be the dimensions of the image, and the depth would be 3 (Red, Green, Blue channels). A regular 3-layer Neural Network.
  • 60. LeNet: a layered model composed of convolution and subsampling operations followed by a holistic representation and ultimately a classifier for handwritten digits. [ LeNet ] Convolutional Neural Networks: 1998. Input 32*32. CPU
  • 61. AlexNet: a layered model composed of convolution, subsampling, and further operations followed by a holistic representation and all-in-all a landmark classifier on ILSVRC12. [ AlexNet ] + data + gpu + non-saturating nonlinearity + regularization Convolutional Neural Networks: 2012. Input 224*224*3. GPU.
  • 62. LeNet vs AlexNet • LeNet: 32*32*1 input – 7 layers: 2 conv and 4 classification – Only two complete convolutional layers (conv, nonlinearities, and pooling as one complete layer) – 60 thousand parameters; about 17k parameters (from the 1989 LeNet-1 of about 10k parameters to the 1998 LeNet-5) • AlexNet: 224*224*3 input – 8 layers: 5 conv and 3 fully connected classification layers – 5 convolutional layers, with conv layers 3, 4, 5 stacked on top of each other; three complete conv layers – 60 million parameters, insufficient data – Data augmentation: patches (224 from 256 input), translations, reflections; PCA to simulate changes in intensity and colors
  • 63. The motivation of convolutions • Sparse interaction, or local connectivity – The receptive field of the neuron, or the filter size – The connections are local in space (width and height), but always full in depth – A set of learnable filters • Parameter sharing: the weights are tied • Equivariant representation: translation equivariance
  • 64. Convolution and matrix multiplication • Discrete convolution can be viewed as multiplication by matrix • The kernel is a doubly block circulant matrix • It is very sparse!
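A 1-D illustration of the point above, under the assumption of circular boundary handling: the circular convolution equals multiplication by a circulant matrix that is very sparse for small kernels; the 2-D analogue is the doubly block circulant matrix mentioned on the slide.

```python
import numpy as np

def circular_conv_matrix(k, n):
    """Circulant matrix C such that C @ x equals the circular 1D convolution k * x."""
    C = np.zeros((n, n))
    for i in range(n):
        for j, kj in enumerate(k):
            C[(i + j) % n, i] = kj
    return C

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0])
C = circular_conv_matrix(k, len(x))
print(C)       # mostly zeros: the matrix is very sparse for a small kernel
print(C @ x)   # the circular convolution of k and x: [-3. 1. 1. 1.]
```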
  • 65. The 'convolution' operation • The convolution is commutative because we have flipped the kernel – Many implementations use a cross-correlation without flipping • A convolution can be defined in 1, 2, 3, and N dimensions – The 2D convolution is different from a real 3D convolution, which integrates spatio-temporal information; the standard CNN convolution has only 'spatial' spreading • In a CNN, even for 3-channel RGB input images, the standard convolution is 2D in each channel – Each channel has a different filter or kernel; the per-channel convolutions are then summed over all channels to produce a scalar for the nonlinear activation – The filter in each channel is not normalized, so there is no need for different linear combination coefficients – A 1×1 convolution is a dot product across channels, a linear combination of the different channels • The backward pass of a convolution is also a convolution, with spatially flipped filters.
  • 66. The convolution layers • Stacking several small convolution layers is different from cascading convolutions – Each small convolution is followed by the nonlinearity ReLU – The nonlinearities make the features more expressive! – Small filters give fewer parameters, but need more memory – Cascading simply enlarges the spatial extent, the receptive field • Is each conv layer also followed by a pooling layer? – LeNet does not do this! – AlexNet at first did not either.
  • 67. The pooling layer • Reduce the spatial size • Reduce the number of parameters • Avoid over-fitting • Backpropagation for a max: only route the gradient to the input that had the highest value in the forward pass • It is unclear whether pooling is essential.
  • 68. Pooling layer down-samples the volume spatially, independently in each depth slice of the input volume. Left: the input volume of size [224x224x64] is pooled with filter size 2, stride 2 into output volume of size [112x112x64]. Notice that the volume depth is preserved. Right: The most common down-sampling operation is max, giving rise to max pooling, here shown with a stride of 2. That is, each max is taken over 4 numbers (little 2x2 square).
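A minimal max-pooling sketch matching the example above (filter size 2, stride 2, each depth slice pooled independently); the [H, W, C] layout is an illustrative convention.

```python
import numpy as np

def max_pool(x, size=2, stride=2):
    """Max-pool each depth slice independently. x has shape [H, W, C]."""
    H, W, C = x.shape
    Ho, Wo = (H - size) // stride + 1, (W - size) // stride + 1
    out = np.zeros((Ho, Wo, C))
    for i in range(Ho):
        for j in range(Wo):
            patch = x[i*stride:i*stride+size, j*stride:j*stride+size, :]
            out[i, j, :] = patch.max(axis=(0, 1))   # max over the little 2x2 square
    return out

x = np.random.randn(224, 224, 64)
print(max_pool(x).shape)    # (112, 112, 64): spatial size halved, depth preserved
```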
  • 69. The spatial hyperparameters • Depth • Stride • Zero-padding
  • 70. AlexNet 2012
  • 71. The convolution and pooling act as an infinitely strong prior! • A strong prior has very low entropy, e.g. a Gaussian with low variance • An infinitely strong prior says that some parameters are forbidden, and places zero probability on them • The convolution ‘prior’ says the identical and zero weights • The pooling forces the invariance of small translations
  • 72. The neuroscientific basis for CNN • The primary visual cortex, V1, about which we know the most • The brain region LGN, the lateral geniculate nucleus, at the back of the head, carries the signal from the eye to V1; a convolutional layer captures three aspects of V1 – It has a 2-dimensional structure – V1 contains many simple cells, linear units – V1 has many complex cells, corresponding to features with shift invariance, similar to pooling • When viewing an object, information flows from the retina, through the LGN, to V1, then onward to V2, then V4, then IT, the inferotemporal cortex, corresponding to the last layer of CNN features • Not modeled at all: the mammalian vision system develops an attention mechanism – The human eye is mostly very low resolution, except for a tiny patch, the fovea – The human brain makes several eye movements (saccades) to salient parts of the scene – Human vision perceives 3D • A simple cell responds to a specific spatial frequency of brightness in a specific direction at a specific location --- Gabor-like functions
  • 73. Receptive field Left: An example input volume in red (e.g. a 32x32x3 CIFAR-10 image), and an example volume of neurons in the first Convolutional layer. Each neuron in the convolutional layer is connected only to a local region in the input volume spatially, but to the full depth (i.e. all color channels). Note, there are multiple neurons (5 in this example) along the depth, all looking at the same region in the input - see discussion of depth columns in text below. Right: The neurons from the Neural Network chapter remain unchanged: They still compute a dot product of their weights with the input followed by a non-linearity, but their connectivity is now restricted to be local spatially.
  • 77. CNN architectures and algorithms
  • 78. CNN architectures • The conventional linear structure, a linear list of layers, feedforward • More generally a DAG, a directed acyclic graph • ResNet simply adds the input back • Different terminology: complex layer and simple layer – A complex (complete) convolutional layer includes different stages such as the convolution per se, batch normalization, nonlinearity, and pooling – Each stage can also be called a layer, even when it has no parameters • The traditional CNNs are just a few complex convolutional layers to extract features, followed by a softmax classification output layer • Convolutional networks can output a high-dimensional, structured object, rather than just predicting a class label for a classification task or a real value for a regression task; the output is a tensor – S_{i,j,k} is the probability that pixel (j, k) belongs to class i
  • 79. The popular CNN • LeNet, 1998 • AlexNet, 2012 • VGGNet, 2014 • ResNet, 2015
  • 80. VGGNet • 16 layers • Only 3*3 convolutions • 138 million parameters
  • 82. Computational complexity • The memory bottleneck • GPU, a few GB
  • 83. Stochastic Gradient Descent • Gradient descent follows the gradient of the entire training set downhill • Stochastic gradient descent follows the gradient of randomly selected minibatches downhill
  • 84. The dropout regularization • Randomly shutdown a subset of units in training • It is a sparse representation • It is a different net each time, but all nets share the parameters – A net with n units can be seen as a collection of 2^n possible thinned nets, all of which share weights. – At test time, it is a single net with averaging • Avoid over-fitting
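A sketch of the inverted-dropout variant, which rescales the surviving units at training time so the single shared-weight net needs no change at test time; the slide's description (averaging at test time) corresponds to the classical variant, so treating it as inverted dropout here is an assumption for illustration.

```python
import numpy as np

def dropout_forward(h, p=0.5, train=True):
    """Randomly shut down a fraction p of the units during training and
    rescale the survivors by 1/(1-p) (inverted dropout)."""
    if not train:
        return h                                  # test time: the full shared-weight net
    mask = (np.random.rand(*h.shape) >= p) / (1.0 - p)
    return h * mask                               # a different thinned net on every call

h = np.ones((2, 8))
print(dropout_forward(h))                         # ~half the units zeroed, the rest scaled by 2
```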
  • 85. Ensemble methods • Bagging (bootstrap aggregating), model averaging, ensemble methods – Model averaging outperforms, with increased computation and memory – Model averaging is discouraged when benchmarking for publications • Boosting, an ensemble method, constructs an ensemble with higher capacity than the individual models
  • 86. Batch normalization • After the convolution, before the nonlinearity • 'Batch' because it is done over a subset of the data • Instead of a fixed 0–1 (zero-mean, unit-variance) input, the distribution is learnt, and can even be undone if necessary • Data normalization or PCA/whitening is common in general NN, but in CNNs the 'normalization layer' has been shown to contribute little in some nets as well.
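A training-time forward-pass sketch of batch normalization; the learnable gamma/beta can re-scale and re-shift (and in principle undo) the normalization. The running averages used at test time are omitted here for brevity, which is an assumption of this sketch.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then re-scale and re-shift."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(32, 100) * 5.0 + 3.0          # a mini-batch of 32 activations
out = batchnorm_forward(x, gamma=np.ones(100), beta=np.zeros(100))
print(out.mean(), out.std())                       # close to 0 and 1
```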
  • 87. Open questions • Networks are non-convex – Need regularization • Smaller networks are hard to train with local methods – Local minima are bad, in loss, not stable, large variance • Bigger ones are easier – More local minima, but better, more stable, small variance • As big as the computational power, and data!
  • 88. Local minima, saddle points, and plateaus? • We don't care about finding the exact minimum; we only want to obtain good generalization error by reducing the function value. • In low dimensions, local minima are common. • In higher dimensions, local minima are rare, and saddle points are more common. – The Hessian at a local minimum has all positive eigenvalues. The Hessian at a saddle point has a mixture of positive and negative eigenvalues. – It is exponentially unlikely that all n eigenvalues will have the same sign in high n-dimensional spaces
  • 90. CNN applications • Transfer learning • Fine-tuning the CNN – Keep some early layers • Early layers contain more generic features: edges, color blobs • Common to many visual tasks – Fine-tune the later layers • More specific to the details of the classes • CNN as a feature extractor – Remove the last fully connected layer – A kind of descriptor, or CNN codes, for the image – AlexNet gives a 4096-dim descriptor
  • 91. CNN classification/recognition nets • CNN layers and fully-connected classification layers • From ResNet to DenseNet – Densely connected – Feature concatenation
  • 92. Fully convolutional nets: semantic segmentation • Classification/recognition nets produce ‘non-spatial’ outputs – the last fully connected layer has the fixed dimension of classes, throws away spatial coordinates • Fully convolutional nets output maps as well
  • 94. Using sliding windows for semantic segmentation
  • 97. Detection and segmentation nets: the Mask Region-based CNN (R-CNN) • Class-independent region (bounding box) proposals – From selective search to a region proposal net with objectness • Use a CNN to classify each region • Regression on the bounding box or contour segmentation • Mask R-CNN: end-to-end – Use a CNN to make proposals on object/non-object in parallel • The good old idea of face detection by Viola – Proposal generation – Cascading (AdaBoost)
  • 98. Using sliding windows for object detection as classification
  • 101. End.
  • 102. Some old notes in 2017 and 2018
  • 103. Fundamentally from continuous to discrete views … from geometry to recognition • 'Simple' neighborhood from topology • Discrete high order • Modeling higher-order discrete structure, yet solving it with first-order differentiable optimization • Modeling and implementation become easier • The multiscale local jet, a hierarchical and local characterization of the image in a full scale-space neighborhood
  • 104. Local jets? Why? • The multiscale local jet, a hierarchical and local characterization of the image in a full scale-space neighborhood • A feature in CNN is one element of the descriptor
  • 105. Classification vs regression • Regression predicts a value from a continuous set, a real number/continuous variable – Given a set of data, find the relationship (often continuous mathematical functions) that represents the set of data – To predict the output value using training data • Whereas classification predicts the 'belonging' to a class, a discrete/categorical variable – Given a set of data and classes, identify the class that the data belongs to – To group the output into a class
  • 106. Autoencoder and decoder • Compression, and reconstruction • Convolutional autoencoder, trained to reconstruct its input – Wide and thin (RGB) to narrow and thick • ….
  • 107. Convolution and deconvolution • The convolution is not invertible, so there is no strict definition, or closed form, of the so-called 'deconvolution' • In an iterative procedure, a kind of convolution transpose is applied, hence the name 'deconvolution' • The 'deconvolution' filters are reconstruction bases
  • 108. CNN as a natural features and descriptors • Each point is an interest point or feature point with high-dimensional descriptors • Some of them are naturally ‘unimportant’, and weighted down by the nets • The spatial information is kept through the nets • The hierarchical description is natural from local to global for each point, each pixel, and each feature point
  • 109. Traditional stereo vs deep stereo regression
  • 110. CNN regression nets: deep regression stereo • Regularize the cost volume.
  • 111. Traditional stereo • Input image H × W × C • (The matching) cost volume over disparities D, or over depths • D × H × W • The value at level i is the matching cost, or the correlation score, possibly accumulated in scale space, for disparity i or depth i • For rectified images the cost is a function of position and disparity, c(x, y; x + d, y) • argmin over D • H × W disparity map
  • 112. End-to-end deep stereo regression • Input image H × W × C • 18 CNN layers • H × W × F • (F feature channels are descriptor vectors for each pixel; we may just correlate or dot-product two descriptor vectors f1 and f2 to produce a score in D × H × W, but F can be further refined in successive convolution layers.) • Cost volume, for each disparity level • D × H × W × 2F • A 4D volume, viewed as a descriptor vector of length 2F for each voxel of D × H × W • 3D convolution over H, W, and D • Layers 19–37: 3D CNN • The last (deconvolution) layer turns F into a scalar score • D × H × W • Soft argmin over D • H × W
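A generic sketch of the soft argmin step at the end of this pipeline, not the exact published implementation: a softmax over negated costs gives per-pixel disparity probabilities, and their expected value is a differentiable, sub-pixel disparity estimate. The cost volume here is random, purely for shape illustration.

```python
import numpy as np

def soft_argmin(cost_volume, disparities=None):
    """cost_volume: [D, H, W] matching costs; returns an [H, W] disparity map."""
    D, H, W = cost_volume.shape
    if disparities is None:
        disparities = np.arange(D, dtype=float)
    # Numerically stable softmax over -cost along the disparity axis.
    s = -cost_volume
    e = np.exp(s - s.max(axis=0, keepdims=True))
    p = e / e.sum(axis=0, keepdims=True)                   # [D, H, W] probabilities
    # Probability-weighted sum of disparity levels (differentiable, sub-pixel).
    return np.tensordot(disparities, p, axes=(0, 0))       # [H, W]

cv = np.random.rand(64, 4, 5)          # a tiny random cost volume for illustration
print(soft_argmin(cv).shape)           # (4, 5)
```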
  • 113. Bayesian decision  posterior = likelihood × prior / evidence  Decide ω₁ if P(ω₁ | x) > P(ω₂ | x); otherwise decide ω₂
  • 115. Reinforcement learning (RL) • Dynamic programming uses a mathematical model of the environment, typically formulated as a Markov Decision Process (MDP) • The main difference between classical dynamic programming and RL is that RL does not assume an exact mathematical model of the MDP, and targets large MDPs where exact methods become infeasible • RL is therefore also called neuro-dynamic programming; in operations research and control it is called approximate dynamic programming
  • 116. Non-linear iterative optimisation • J d = r, from the vector expansion F(x + d) = F(x) + J d • Minimize the square of y − F(x + d) = y − F(x) − J d = r − J d • The normal equation is J^T J d = J^T r (Gauss-Newton) • (H + λ I) d = J^T r (LM) Note: F is a vector of functions, i.e. min f = (y − F)^T (y − F)
  • 117. General non-linear optimisation • 1st order: gradient descent, d = −g (take H = I) • 2nd order: • Newton step: H d = −g • Gauss-Newton for least squares: f = (y − F)^T (y − F), H = J^T J, g = −J^T r • 'Restricted step', trust region, LM: (H + λ W) d = −g R. Fletcher: Practical Methods of Optimization f(x + d) = f(x) + g^T d + ½ d^T H d Note: f is a scalar-valued function here.
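A minimal Levenberg–Marquardt sketch for the least-squares case above, using λI in place of the more general λW and a simple accept/reject damping schedule; the exponential-fit test problem is an illustrative assumption.

```python
import numpy as np

def levenberg_marquardt(F, J, x, y, lam=1e-3, iters=50):
    """Minimize ||y - F(x)||^2. Each step solves (J^T J + lam*I) d = J^T r,
    sliding between Gauss-Newton (small lam) and gradient descent (large lam)."""
    for _ in range(iters):
        r = y - F(x)                          # residual
        Jx = J(x)
        A = Jx.T @ Jx + lam * np.eye(x.size)
        d = np.linalg.solve(A, Jx.T @ r)
        if np.sum((y - F(x + d))**2) < np.sum(r**2):
            x = x + d; lam *= 0.5             # accept the step, trust the model more
        else:
            lam *= 10.0                       # reject the step, increase damping
    return x

# Fit y = a*exp(b*t) to noisy data; the unknowns are (a, b).
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * t) + 0.01 * np.random.randn(20)
F = lambda p: p[0] * np.exp(p[1] * t)
J = lambda p: np.c_[np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)]
print(levenberg_marquardt(F, J, np.array([1.0, 1.0]), y))   # close to [2.0, 1.5]
```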
  • 118. Statistics • 'Small errors' --- classical estimation theory • analytical, based on first-order approximation • Monte Carlo • 'Big errors' --- robust statistics • RANSAC • LMS • M-estimators Talk about it later
  • 119. An abstract view  The input 𝒙  The classification with linear models  Can be an SVM  Or the output layer of the linear softmax classifier  Find a map or a transform 𝒙 ↦ 𝜙(𝒙) to make them linear, but in higher dimensions  𝜙(𝒙) provides a set of features describing 𝒙  Or provides a new representation for 𝒙  The nonlinear transformation 𝜙  Can be hand-designed kernels in SVM  It is the hidden layer of a feedforward network  Deep learning  To learn 𝜙  𝜙 is compositional, multiple layers  CNN is convolutional for each layer
  • 120. To do • Add a drawing for 'receptive fields' • Dot product for vectors, convolution, more specific for 2D? Time invariant, or translation invariant, equivariant • Convolution, then nonlinear activation, also called the 'detection stage', the detector • Pooling is for efficiency, down-sampling, and handling inputs of variable sizes
  • 121. Cost or loss functions - bis • Classification and regression are different! for different application communities • We used to do more traditional regression, but the modern emphasis is more on classification, resulting in different loss considerations • Regression is harder to optimize, while classification is easier and more stable • Classification is easier than regression, so always discretize and quantize the output and convert the problem into a classification task! • One big improvement in modern NN development is that the cross-entropy has displaced the mean squared error L2; the mean squared error was popular and good for regression, but not that good for NN, because the cross-entropy rests on a fundamentally more appropriate distribution assumption than the normal distribution • L2 for regression is harder to optimize than the more stable softmax for classification • L2 is also less robust
  • 122. Automatic differentiation (algorithmic differentiation) and backprop, and their role in the development • Differentiation: symbolic or numerical (finite differences) • Automatic differentiation computes derivatives algorithmically; backprop is only one approach to it • Its history is intertwined with that of NN and deep learning • It worked for traditional small systems, a single f(x) • The larger and more explicitly compositional nature of f1(f2(f3(… fn(x)))) goes back to the very nature of derivatives – Forward mode and reverse mode (forward mode is based on dual numbers: f(a + b·ε) = f(a) + f′(a)·b·ε, and f(g(a + b·ε)) = f(g(a)) + f′(g(a))·g′(a)·b·ε) – The reverse mode is backprop for NN • In the end, it also benefits classical large optimizations such as bundle adjustment
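A toy forward-mode automatic differentiation sketch using the dual-number rule quoted above; the `Dual` class and the wrapped `sin` are illustrative names, not part of any library.

```python
import math

class Dual:
    """Forward-mode AD with dual numbers: f(a + b*eps) = f(a) + f'(a)*b*eps."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carried by the epsilon part.
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: f(g(a)) + f'(g(a)) g'(a) eps.
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x*sin(x) + 3x] at x = 2: seed the derivative with dot = 1.
x = Dual(2.0, 1.0)
y = x * sin(x) + 3.0 * x
print(y.val, y.dot)   # value, and derivative sin(2) + 2*cos(2) + 3
```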
  • 123. Viewing the composition of an arbitrary function as a natural layering • Take f(x, y) = (x + s(y)) / (s(x) + (x + y)²), with s the sigmoid, at the point x = 3, y = −4 • Forward pass • f1 = s(y), f2 = x + f1, f3 = s(x), f4 = x + y, f5 = f4², f6 = f3 + f5, f7 = 1/f6, f8 = f2·f7 • So f(*) = f8(f7(f6(f5(f4(f3(f2(f1(*)))))))), where each fn is a known elementary function or operation • Backprop to get (df/dx, df/dy), abbreviated (dx, dy), at (3, −4) • f8 = f; abbreviate df/df7 (that is, df8/df7) as df7, and df/dx as dx, … • df7 = f2, (df2 = f7), df6 = (−1/f6²)·df7, df5 = df6, (df3 = df6), df4 = (2·f4)·df5, dx = df4, dy = df4, dx += (1 − s(x))·s(x)·df3 (backprop through s(x) = f3), dx += df2 (backprop through f2), df1 = df2, dy += (1 − s(y))·s(y)·df1 (backprop through s(y) = f1) • In NN there are just more variables in each layer, but the elementary functions are much simpler: add, multiply, and max • Even the primitive function in each layer is the simplest one! There are just a lot of them!
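A runnable version of the trace above, in the style of the cs231n notes; the intermediate names f1…f8 match the slide.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, y = 3.0, -4.0
# Forward pass for f(x, y) = (x + s(y)) / (s(x) + (x + y)^2)
f1 = sigmoid(y)          # s(y)
f2 = x + f1              # numerator
f3 = sigmoid(x)          # s(x)
f4 = x + y
f5 = f4 ** 2
f6 = f3 + f5             # denominator
f7 = 1.0 / f6
f8 = f2 * f7             # f(x, y)

# Backward pass: dfk denotes df/dfk, accumulated with +=
df7 = f2; df2 = f7
df6 = (-1.0 / f6 ** 2) * df7
df5 = df6; df3 = df6
df4 = (2.0 * f4) * df5
dx, dy = df4, df4                 # through f4 = x + y
dx += (1.0 - f3) * f3 * df3       # through s(x)
dx += df2                         # through f2 = x + f1
df1 = df2
dy += (1.0 - f1) * f1 * df1       # through s(y)
print(f8, dx, dy)                 # about 1.55, 2.06, 1.59
```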
  • 124. Computational graph • NN is described with a relatively informal graph language • More precise computational graph to describe the backprop algorithms
  • 125. Structured probabilistic models, Graphical models, and factor graph • A structured probabilistic model is a way of describing a probability distribution. • Structured probabilistic models are referred to as graphical models • Factor graphs are another way of drawing undirected models that resolve an ambiguity in the graphical representation of standard undirected model syntax.

Editor's Notes

  • #2: 1587, A Year of No Significance, Ray Huang (万历十五年, 黄仁宇); China: A Macro History (中国大历史)
  • #5: Now the big trends in visual computing are …
  • #8: Affine and Euclidean are finite geometries in which we are handling only finite points
  • #9: LU is for solving a simple linear system: instead of inverting the matrix, it reduces to two triangular systems which can be easily computed by forward/backward substitution. QR can solve full-rank least squares. Then, if you don't know the rank, do it with SVD and the pseudo-inverse
  • #10: this means: g is the gradient vector, H is the Hessian matrix; the Jacobian is for the squared model, r^T r …
  • #11: this means: g is the gradient vector, H is the Hessian matrix; the Jacobian is for the squared model, r^T r …
  • #13: K-means clustering is an unsupervised learning method, whereas KNN is a supervised learning method. Unsupervised learning: clustering and PCA
  • #14: K-means clustering is an unsupervised learning method, whereas KNN is a supervised learning method. Unsupervised learning: clustering and PCA
  • #15: K-means clustering is an unsupervised learning method, whereas KNN is a supervised learning method. Unsupervised learning: clustering and PCA
  • #16: K-means clustering is an unsupervised learning method, whereas KNN is a supervised learning method. Unsupervised learning: clustering and PCA
  • #17: K-means clustering is an unsupervised learning method, whereas KNN is a supervised learning method. Unsupervised learning: clustering and PCA
  • #19: Can take the nonlinear distance function to the hyperplane, and interpret this distance function as a probability
  • #20: (If we take the sigmoid as the activation, then a neuron is a binary softmax classifier.) w·x + b > 0 -> w·x > −b, merely a thresholding shift or bias. The dot product (the inner product, scalar product, Euclidean metric) acts as a distance measure between the template w and the input x, giving their angular distance: x·w = ||x|| ||w|| cos θ; the true distance, by the cosine law, is d = ||x − w||² = ||x||² + ||w||² − 2||x|| ||w|| cos θ = x·x + w·w − 2 x·w. In the dot product, w is a template, or a prototype
  • #24: Sigmoid, with x1=0, x2=x
  • #26: Sigmoid, with x1=0, x2=x
  • #29: They admit simple algorithms where the form of the nonlinearity can be learned from the training data. They are extremely powerful, have nice theoretical properties, apply well to a vast array of applications.
  • #30: Multi-layer perceptrons are a subclass of neural networks, the classical type of NN, and are always feedforward NN. A perceptron is more restrictive, originally. Mostly, MLP is what we call feedforward neural networks; then we have CNN, and RNN …
  • #36: Can take the nonlinear distance function to the hyperplane, and interpret this distance function as a probability
  • #40: Chain rule of calculus, chain rule of probability
  • #41: Chain rule of calculus, chain rule of probability
  • #44: The expansion of log p typically yields some terms that do not depend on the model parameters and may be discarded. If p is normal, then we recover the mean squared error cost, J = E || y – f(x, theta) ||^2 + c, up to a scaling factor and a term that does not depend on theta.
  • #46: The expansion of log p typically yields some terms that do not depend on the model parameters and may be discarded. If p is normal, then we recover the mean squared error cost, J = E || y – f(x, theta) ||^2 + c, up to a scaling factor and a term that does not depend on theta.
  • #47: Chain rule of calculus, chain rule of probability
  • #49: The expansion of log p typically yields some terms that do not depend on the model parameters and may be discarded. If p is normal, then we recover the mean squared error cost, J = E || y – f(x, theta) ||^2 + c, up to a scaling factor and a term that does not depend on theta.
  • #50: The expansion of log p typically yields some terms that do not depend on the model parameters and may be discarded. If p is normal, then we recover the mean squared error cost, J = E || y – f(x, theta) ||^2 + c, up to a scaling factor and a term that does not depend on theta.
  • #51: The expansion of log p typically yields some terms that do not depend on the model parameters and may be discarded. If p is normal, then we recover the mean squared error cost, J = E || y – f(x, theta) ||^2 + c, up to a scaling factor and a term that does not depend on theta.
  • #78: http://cs231n.github.io/neural-networks-1/
  • #79: http://cs231n.github.io/neural-networks-1/
  • #90: http://cs231n.github.io/neural-networks-1/
  • #91: http://cs231n.github.io/neural-networks-1/
  • #92: http://cs231n.github.io/neural-networks-1/
  • #97: HOG features, Histogram of Oriented Gradients, by Dalal and Triggs
  • #103: http://cs231n.github.io/neural-networks-1/
  • #104: http://cs231n.github.io/neural-networks-1/
  • #110: http://cs231n.github.io/neural-networks-1/
  • #112: Alex Kendall et al, Skydio inc. End-to-end learning of geometry and context for deep stereo regression 2017
  • #113: Now the big trends in visual computing are …
  • #114: Now the big trends in visual computing are …
  • #116: this means: g is the gradient vector, H is the Hessian matrix; the Jacobian is for the squared model, r^T r …
  • #117: this means: g is the gradient vector, H is the Hessian matrix; the Jacobian is for the squared model, r^T r …
  • #119: They admit simple algorithms where the form of the nonlinearity can be learned from the training data. They are extremely powerful, have nice theoretical properties, apply well to a vast array of applications.
  • #122: Through cs231n, https://arxiv.org/abs/1502.05767
  • #123: See Stanford cs231n python example
  • #124: http://www.ams.org/publicoutreach/feature-column/fc-2017-12