1. Deep Learning
Convolutional and Pooling Layers
Dr. Ahsen Tahir
The slides have been adapted in part from Ian Goodfellow's book slides and Alex's Dive into Deep Learning book slides
3. Classifying Dogs and Cats in Images
• Use a good camera
• RGB image has 36M elements
• The model size of a single hidden layer MLP with a hidden size of 100 is 3.6 billion parameters
• Exceeds the population of dogs and cats on earth (900M dogs + 600M cats)
4. Flashback - Network with one hidden layer
36M features
100 neurons
h = σ(Wx + b)
3.6B parameters = 14GB
16. Idea #1 - Translation Invariance
• Starting from the fully connected form h_{i,j} = ∑_{a,b} v_{i,j,a,b} x_{i+a,j+b}
• A shift in x also leads to a shift in h
• v should not depend on (i,j). Fix via v_{i,j,a,b} = v_{a,b}
• This gives h_{i,j} = ∑_{a,b} v_{a,b} x_{i+a,j+b}
• That's a 2-D convolution (strictly speaking, a cross-correlation)
17. Idea #2 - Locality
• We shouldn't look very far from x_{i,j} in order to assess what's going on at h_{i,j}
• Outside a range Δ, parameters vanish: v_{a,b} = 0 for |a|, |b| > Δ
• h_{i,j} = ∑_{a=−Δ}^{Δ} ∑_{b=−Δ}^{Δ} v_{a,b} x_{i+a,j+b}
18. 2-D Convolution Layer
• Input matrix X : n_h × n_w
• Kernel matrix W : k_h × k_w
• b: scalar bias
• Output matrix Y : (n_h − k_h + 1) × (n_w − k_w + 1)
• Y = X ⋆ W + b
• W and b are learnable parameters
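The layer above can be sketched directly with two nested loops. This is a minimal NumPy illustration, not a framework implementation; the name `corr2d` is illustrative:

```python
import numpy as np

# Minimal sketch of the 2-D cross-correlation computed by a convolutional
# layer: slide the k_h x k_w kernel W over the n_h x n_w input X.
def corr2d(X, W):
    n_h, n_w = X.shape
    k_h, k_w = W.shape
    # Output shape from the slide: (n_h - k_h + 1) x (n_w - k_w + 1)
    Y = np.zeros((n_h - k_h + 1, n_w - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * W).sum()
    return Y

X = np.arange(9.0).reshape(3, 3)
W = np.arange(4.0).reshape(2, 2)
print(corr2d(X, W))  # [[19. 25.] [37. 43.]]
```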
24. Cross Correlation vs Convolution
• 2-D cross-correlation: y_{i,j} = ∑_{a=1}^{h} ∑_{b=1}^{w} w_{a,b} x_{i+a,j+b}
• 2-D convolution: y_{i,j} = ∑_{a=1}^{h} ∑_{b=1}^{w} w_{−a,−b} x_{i+a,j+b}
• No difference in practice due to symmetry
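The symmetry claim can be checked numerically: a true convolution is just a cross-correlation with the kernel flipped along both axes, so whichever kernel one operation learns, the other can learn the flipped version. A NumPy sketch (helper names are illustrative):

```python
import numpy as np

def corr2d(X, W):
    # Cross-correlation, as defined above
    k_h, k_w = W.shape
    Y = np.zeros((X.shape[0] - k_h + 1, X.shape[1] - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * W).sum()
    return Y

def conv2d(X, W):
    # True convolution: flip the kernel on both axes, then cross-correlate
    return corr2d(X, W[::-1, ::-1])

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
W = rng.standard_normal((3, 3))
# Cross-correlating with W equals convolving with the flipped kernel,
# so in a learned layer the distinction does not matter.
assert np.allclose(corr2d(X, W), conv2d(X, W[::-1, ::-1]))
```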
25. 1-D and 3-D Cross Correlations
y_i = ∑_{a=1}^{h} w_a x_{i+a}
y_{i,j,k} = ∑_{a=1}^{h} ∑_{b=1}^{w} ∑_{c=1}^{d} w_{a,b,c} x_{i+a,j+b,k+c}
• 1-D
  • Text
  • Voice
  • Time series
• 3-D
  • Video
  • Medical images
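The 1-D case is the simplest to write out, e.g. for a time series. A minimal NumPy sketch (the name `corr1d` is illustrative):

```python
import numpy as np

# Minimal sketch of 1-D cross-correlation over a sequence x with kernel w.
def corr1d(x, w):
    h = len(w)
    return np.array([(x[i:i + h] * w).sum() for i in range(len(x) - h + 1)])

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 2.0])
print(corr1d(x, w))  # [ 2.  5.  8. 11.]
```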
28. Padding
• Given a 32 x 32 input image
• Apply convolutional layer with 5 x 5 kernel
• 28 x 28 output with 1 layer
• 4 x 4 output with 7 layers
• Shape decreases faster with larger kernels
• Shape reduces from n_h × n_w to (n_h − k_h + 1) × (n_w − k_w + 1)
30. Padding
• Padding adds p rows/columns of zeros around each side of the input
• Output shape per dimension becomes (n − k + 2p + 1)
• p = 1 means one layer of zeros around each side of the image
• A common choice is 2p = k − 1, which preserves the input shape
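Padding is easy to check numerically with `np.pad` (zero-fill by default). A hedged sketch reusing the illustrative `corr2d` helper from earlier:

```python
import numpy as np

def corr2d(X, W):
    k_h, k_w = W.shape
    Y = np.zeros((X.shape[0] - k_h + 1, X.shape[1] - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * W).sum()
    return Y

def pad_corr2d(X, W, p):
    # p zeros on every side; the output side becomes n - k + 2p + 1
    return corr2d(np.pad(X, p), W)

X = np.ones((32, 32))
W = np.ones((5, 5))
# With 2p = k - 1 (p = 2 for a 5 x 5 kernel) the shape is preserved:
print(pad_corr2d(X, W, 2).shape)  # (32, 32)
```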
31. Stride
• Even with padding, shape shrinks only linearly with the number of layers
• Given a 224 x 224 input with a 5 x 5 kernel, needs 44 layers to reduce the shape to 4 x 4
• Requires a large amount of computation
32. Stride
• Stride is the number of rows/columns the window moves per step
• Strides of 3 and 2 for height and width:
0 × 0 + 0 × 1 + 1 × 2 + 2 × 3 = 8
0 × 0 + 6 × 1 + 0 × 2 + 0 × 3 = 6
33. Stride
• Given stride s_h for the height and stride s_w for the width, the output shape is
⌊(n_h − k_h + 2p + 1)/s_h⌋ × ⌊(n_w − k_w + 2p + 1)/s_w⌋
• With 2p = k − 1: n + 2p − k + 1 = n, so the shape is roughly (n_h/s_h) × (n_w/s_w)
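The shape bookkeeping can be packaged in a one-line helper. This sketch uses the formula frameworks typically compute, ⌊(n − k + 2p)/s⌋ + 1, which agrees with the slide's approximation n/s when 2p = k − 1:

```python
# Output length along one dimension for input size n, kernel k,
# padding p (zeros per side) and stride s.
def conv_out_len(n, k, p, s):
    return (n - k + 2 * p) // s + 1

# With 2p = k - 1 the output is roughly n / s, as on the slide:
print(conv_out_len(224, k=5, p=2, s=2))  # 112 = 224 / 2
print(conv_out_len(224, k=5, p=2, s=1))  # 224, shape preserved
```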
35. Multiple Input Channels
⢠Color image may have three RGB channels
⢠Converting to grayscale loses information
37. Multiple Input Channels
• Have a kernel for each channel, and then sum results over channels
(1 × 1 + 2 × 2 + 4 × 3 + 5 × 4) + (0 × 0 + 1 × 1 + 3 × 2 + 4 × 3) = 56
38. Multiple Input Channels
• Input X : c_i × n_h × n_w
• Kernel W : c_i × k_h × k_w
• Output Y : m_h × m_w
• Y = ∑_{i=0}^{c_i − 1} X_{i,:,:} ⋆ W_{i,:,:}
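The channel sum above can be sketched by looping over channels and adding the 2-D results. Helper names are illustrative; the example input reproduces the slide's 56 in the top-left corner:

```python
import numpy as np

def corr2d(X, W):
    k_h, k_w = W.shape
    Y = np.zeros((X.shape[0] - k_h + 1, X.shape[1] - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * W).sum()
    return Y

def corr2d_multi_in(X, W):
    # X and W are c_i-channel stacks: correlate per channel, then sum
    return sum(corr2d(x, w) for x, w in zip(X, W))

X = np.array([[[0., 1., 2.], [3., 4., 5.], [6., 7., 8.]],
              [[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]]])
W = np.array([[[0., 1.], [2., 3.]],
              [[1., 2.], [3., 4.]]])
print(corr2d_multi_in(X, W))  # top-left entry is 56, as on the slide
```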
39. Multiple Output Channels
• No matter how many input channels, so far we always get a single output channel
• We can have multiple 3-D kernels, each one generating an output channel
• Input X : c_i × n_h × n_w
• Kernel W : c_o × c_i × k_h × k_w
• Output Y : c_o × m_h × m_w
• Y_{i,:,:} = X ⋆ W_{i,:,:,:} for i = 1, …, c_o
• TensorFlow → channels last (default)
• PyTorch → channels first (default)
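Stacking one multi-input correlation per 3-D kernel gives the c_o output channels. A hedged NumPy sketch (illustrative helper names, channels-first layout as in PyTorch):

```python
import numpy as np

def corr2d(X, W):
    k_h, k_w = W.shape
    Y = np.zeros((X.shape[0] - k_h + 1, X.shape[1] - k_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + k_h, j:j + k_w] * W).sum()
    return Y

def corr2d_multi_in(X, W):
    return sum(corr2d(x, w) for x, w in zip(X, W))

def corr2d_multi_in_out(X, W):
    # W is c_o x c_i x k_h x k_w: one 3-D kernel per output channel
    return np.stack([corr2d_multi_in(X, w) for w in W])

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8, 8))     # c_i = 3 input channels
W = rng.standard_normal((4, 3, 2, 2))  # c_o = 4 kernels
print(corr2d_multi_in_out(X, W).shape)  # (4, 7, 7)
```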
40. Multiple Input/Output Channels
• Each output channel may recognize a particular pattern
• The kernels over input channels recognize and combine patterns in the inputs
41. 1 x 1 Convolutional Layer
• k_h = k_w = 1 is a popular choice. It doesn't recognize spatial patterns, but fuses channels
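Because a 1 x 1 convolution never looks at spatial neighbors, it reduces to a fully connected layer applied at every pixel, i.e. a single matrix multiply over the channel dimension. A sketch (the `conv1x1` name is illustrative):

```python
import numpy as np

# A 1 x 1 convolution mixes c_i input channels into c_o output channels
# independently at each pixel, so it is a per-pixel matrix multiply.
def conv1x1(X, W):
    c_i, h, w = X.shape             # W : c_o x c_i
    Y = W @ X.reshape(c_i, h * w)   # per-pixel channel fusion
    return Y.reshape(W.shape[0], h, w)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 4))  # c_i = 3
W = rng.standard_normal((2, 3))     # c_o = 2
print(conv1x1(X, W).shape)          # (2, 4, 4)
```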
42. 2-D Convolution Layer Summary
• Input X : c_i × n_h × n_w
• Kernel W : c_o × c_i × k_h × k_w
• Bias B : c_o × c_i
• Output Y : c_o × m_h × m_w
• Y = X ⋆ W + B
• Complexity (number of floating point operations, FLOP): O(c_i c_o k_h k_w m_h m_w)
• Example: c_i = c_o = 100, k_h = k_w = 5, m_h = m_w = 64 → 1 GFLOP
• 10 layers, 1M examples: 10 PF (CPU: 0.15 TF = 18h, GPU: 12 TF = 14min)
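The slide's numbers can be checked with plain arithmetic:

```python
# Checking the complexity example: c_i = c_o = 100, k_h = k_w = 5,
# m_h = m_w = 64 gives about 1 GFLOP for one forward pass of the layer.
c_i = c_o = 100
k_h = k_w = 5
m_h = m_w = 64
flops = c_i * c_o * k_h * k_w * m_h * m_w
print(flops / 1e9)   # 1.024, i.e. ~1 GFLOP

# 10 such layers over 1M examples is ~10 PFLOP:
total = 10 * 1_000_000 * flops
print(total / 1e15)            # ~10.24 PFLOP
print(total / 0.15e12 / 3600)  # hours on a 0.15 TFLOPS CPU (~19 h)
print(total / 12e12 / 60)      # minutes on a 12 TFLOPS GPU (~14 min)
```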
44. Pooling
• Convolution is sensitive to position
• Example: detecting vertical edges
• Shifting the input X by 1 pixel gives 0 output in Y at the original edge location
• We need some degree of invariance to translation
• Lighting, object positions, scales, appearance vary among images
45. 2-D Max Pooling
• Returns the maximal value in the sliding window
max(0, 1, 3, 4) = 4
46. 2-D Max Pooling
• Returns the maximal value in the sliding window
• Conv output → 2 x 2 max pooling: vertical edge detection becomes tolerant to a 1-pixel shift
47. Padding, Stride, and Multiple Channels
• Pooling layers have similar padding and stride as convolutional layers
• No learnable parameters
• Apply pooling to each input channel to obtain the corresponding output channel
• #output channels = #input channels
48. Average Pooling
• Max pooling: the strongest pattern signal in a window
• Average pooling: replace max with mean in max pooling
• The average signal strength in a window
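Both pooling variants fit in one small sketch; the `mode` switch picks max or mean. Note this illustration uses stride 1, while frameworks typically default the pooling stride to the window size:

```python
import numpy as np

# Sketch of 2-D pooling over a p_h x p_w window (illustrative name pool2d).
def pool2d(X, pool_size, mode='max'):
    p_h, p_w = pool_size
    Y = np.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            window = X[i:i + p_h, j:j + p_w]
            Y[i, j] = window.max() if mode == 'max' else window.mean()
    return Y

X = np.arange(9.0).reshape(3, 3)
print(pool2d(X, (2, 2)))         # top-left: max(0,1,3,4) = 4
print(pool2d(X, (2, 2), 'avg'))  # top-left: mean(0,1,3,4) = 2.0
```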
55. LeNet in MXNet
from mxnet import gluon

net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Conv2D(channels=20, kernel_size=5, activation='tanh'))
    net.add(gluon.nn.AvgPool2D(pool_size=2))
    net.add(gluon.nn.Conv2D(channels=50, kernel_size=5, activation='tanh'))
    net.add(gluon.nn.AvgPool2D(pool_size=2))
    net.add(gluon.nn.Flatten())
    net.add(gluon.nn.Dense(500, activation='tanh'))
    net.add(gluon.nn.Dense(10))
loss = gluon.loss.SoftmaxCrossEntropyLoss()
(size and shape inference is automatic)
56. courses.d2l.ai/berkeley-stat-157
Summary
• Convolutional layer
  • Reduced model capacity compared to dense layer
  • Efficient at detecting spatial patterns
  • High computation complexity
  • Control output shape via padding, strides and channels
• Max/average pooling layer
  • Provides some degree of invariance to translation