Batch normalization and its happy companions
杜岳華
Outline
Batch normalization (https://arxiv.org/abs/1502.03167)
Layer normalization (https://arxiv.org/abs/1607.06450)
Recurrent batch normalization (https://arxiv.org/abs/1603.09025)
Group normalization (https://arxiv.org/abs/1803.08494)
How does batch normalization help optimization?
(https://arxiv.org/abs/1805.11604)
Effect
Improved accuracy
Faster learning
More stable training
Batch normalization [Google]
Problem: the distribution of each layer's input changes during training.
Solution: fix the distribution of inputs into a subnetwork
Effect: enables higher learning rates, improves training efficiency
Internal covariate shift (ICS)
Batch Normalization—What the hey? (https://gab41.lab41.org/batch-normalization-what-the-hey-d480039a9e3b)
Assumptions
$h_1 = F_1(x; \Theta_1)$

$\vdots$

$h_i = F_i(h_{i-1}; \Theta_i)$

$\vdots$

$y = F_k(h_{k-1}; \Theta_k)$
Assumptions
$h_1 = F_1(x; W_1, b_1) = f(W_1 x + b_1)$

$\vdots$

$h_i = F_i(h_{i-1}; W_i, b_i) = f(W_i h_{i-1} + b_i)$

$\vdots$

$y = F_k(h_{k-1}; W_k, b_k) = f(W_k h_{k-1} + b_k)$
Batch normalization
Ideally, computing $\mu, \sigma$ over the whole training set would be best. In practice they are computed per mini-batch of size $m$:

$\mu = \frac{1}{m} \sum_{i=1}^{m} x_i$

$\sigma^2 = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu)^2$

$\hat{x}_i \leftarrow \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}$

$y_i \leftarrow \mathrm{BN}_{\gamma,\beta}(\hat{x}_i) = \gamma \hat{x}_i + \beta$

$\gamma$ and $\beta$ are network parameters.
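A rough sketch of what the training-time forward pass above computes (NumPy, hypothetical shapes; a minimal illustration, not the paper's reference implementation):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: mini-batch of shape (m, d); gamma, beta: learnable parameters of shape (d,)
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta            # scale and shift

# example: a mini-batch of 32 samples with 4 features
y = batchnorm_forward(np.random.randn(32, 4), np.ones(4), np.zeros(4))
```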
Batch normalization
$\epsilon$ is a constant added to the mini-batch variance for numerical stability
Each mini-batch produces estimates of the mean and variance
BatchNorm can be added before or after the activation function
Batch normalization
[Figure: BatchNorm with parameters $\gamma, \beta$ inserted between the affine transform $Wx+b$ and the activation $f$, mapping $x_i \to \hat{x}_i \to y_i$]
Batch normalization
Ensure the output statistics of a layer are fixed.

[Figure: BatchNorm ($\gamma, \beta$) mapping the layer output $x_i$ to $y_i$ so that the statistics passed through $f$ to the next $Wx+b$ layer are fixed]
Testing
Ideal solution: compute $\mu, \sigma$ over the whole training set
Practical solution: compute a moving average of the $\mu, \sigma$ of mini-batches during training
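A minimal sketch of the practical solution, assuming an exponential moving average with a hypothetical momentum of 0.9:

```python
import numpy as np

class BNInferenceStats:
    """Tracks running mean/variance during training and reuses them at test time."""
    def __init__(self, dim, momentum=0.9):
        self.momentum = momentum
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)

    def update(self, batch):
        # called once per training mini-batch
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * batch.mean(axis=0)
        self.var = m * self.var + (1 - m) * batch.var(axis=0)

    def normalize(self, x, gamma, beta, eps=1e-5):
        # used at test time: fixed statistics, no dependence on the test batch
        return gamma * (x - self.mean) / np.sqrt(self.var + eps) + beta
```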
Pros and Cons
Advantages
Allows higher learning rates
Can remove dropout and reduce other regularization
Can remove local response normalization
Acts as a regularizer
Disadvantages
Extra computation
Ineffective with small batch sizes
Layer normalization [University of Toronto, G. Hinton]
Problem: BatchNorm depends on the mini-batch, and it is not obvious how to apply it to RNNs
Sequences in an RNN have varying lengths
Hard to apply to online learning
Solution: move the normalization within a layer (across its hidden units) and place it before the non-linearity
Assumptions

Feed-forward neural network
The $l$-th hidden layer: $h^{l+1} = f(W^l h^l + b^l)$
Standard RNN
The $t$-th hidden state: $h^{t+1} = f(W_h h^t + W_x x^t + b)$
Layer normalization
Compute the layer normalization statistics over all the hidden units in the same layer.
The $i$-th hidden unit: $h_i^{l+1} = f(w_i^{l\,T} h^l + b_i^l)$

$\Rightarrow a_i^l = w_i^{l\,T} h^l, \quad h_i^{l+1} = f(a_i^l + b_i^l)$

$\mu^l = \frac{1}{H} \sum_{i=1}^{H} a_i^l$

$\sigma^l = \sqrt{\frac{1}{H} \sum_{i=1}^{H} (a_i^l - \mu^l)^2}$

where $H$ is the number of hidden units in the layer.
Layer normalization
All the hidden units in a layer share the same normalization terms
Different training cases have different normalization terms
$\mathrm{LN}_{g,b}(a^t) = g \odot \frac{a^t - \mu^t}{\sigma^t} + b$

$g$ and $b$ are network parameters.
$\odot$: Hadamard product, i.e. element-wise multiplication
Layer normalization
$a^t = W_h h^{t-1} + W_x x^t$

$h^t = f(\mathrm{LN}_{g,b}(a^t))$
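A minimal per-example sketch of the normalization above (NumPy; the 1-D pre-activation vector and its size are assumptions for illustration):

```python
import numpy as np

def layer_norm(a, g, b, eps=1e-5):
    # a: pre-activations of one example, shape (H,); g, b: gain and bias, shape (H,)
    mu = a.mean()        # statistics over the hidden units, not over the batch
    sigma = a.std()
    return g * (a - mu) / (sigma + eps) + b

# example: one RNN time step, assuming a_t has already been computed
h_t = np.tanh(layer_norm(np.random.randn(128), np.ones(128), np.zeros(128)))
```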
Some analyses
Compare the invariance between batch, weight and layer normalization
Geometry of parameter space during training
make learning more stable
Pros and Cons
Advantages
Faster convergence
Reduces the vanishing gradient problem
Disadvantages
Not well suited to CNNs
Recurrent batch normalization
Problem: BatchNorm has seen only limited use in stacked RNNs
Solution: apply batch normalization to the hidden-to-hidden transition
Assumptions
input: $x_{t-1}$
hidden state: $h_{t-1}$, $c_t$
output: $h_t$

$f_t = \mathrm{sigm}(W_h h_{t-1} + W_x x_{t-1} + b)$
$i_t = \mathrm{sigm}(W_h h_{t-1} + W_x x_{t-1} + b)$
$o_t = \mathrm{sigm}(W_h h_{t-1} + W_x x_{t-1} + b)$
$g_t = \tanh(W_h h_{t-1} + W_x x_{t-1} + b)$

$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(c_t)$

Illustrated LSTM (https://brohrer.mcknote.com/zh-Hant/how_machine_learning_works/how_rnns_lstm_work.html)
Assumptions

[Figure: LSTM unit diagram, with $\sigma$/$\tanh$ gates $F_t$, $I_t$, $O_t$ mapping $x_t$, $h_{t-1}$, $c_{t-1}$ to $h_t$, $c_t$; from Wikipedia (https://en.wikipedia.org/wiki/Recurrent_neural_network)]
Recurrent batch normalization
$\mathrm{BN}_{\gamma,\beta}(x) = \gamma \odot \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}} + \beta$

$f_t = \mathrm{sigm}(\mathrm{BN}_{\gamma_h,\beta_h}(W_h h_{t-1}) + \mathrm{BN}_{\gamma_x,\beta_x}(W_x x_{t-1}) + b)$
$i_t = \mathrm{sigm}(\mathrm{BN}_{\gamma_h,\beta_h}(W_h h_{t-1}) + \mathrm{BN}_{\gamma_x,\beta_x}(W_x x_{t-1}) + b)$
$o_t = \mathrm{sigm}(\mathrm{BN}_{\gamma_h,\beta_h}(W_h h_{t-1}) + \mathrm{BN}_{\gamma_x,\beta_x}(W_x x_{t-1}) + b)$
$g_t = \tanh(\mathrm{BN}_{\gamma_h,\beta_h}(W_h h_{t-1}) + \mathrm{BN}_{\gamma_x,\beta_x}(W_x x_{t-1}) + b)$

$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(\mathrm{BN}_{\gamma_c,\beta_c}(c_t))$
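A condensed sketch of one such BN-LSTM step (NumPy; shapes, gate ordering and a single shared set of statistics are assumptions here, whereas the paper keeps separate per-time-step statistics):

```python
import numpy as np

def bn(x, gamma, beta, eps=1e-5):
    mu, var = x.mean(axis=0), x.var(axis=0)        # mini-batch statistics
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def sigm(z):
    return 1.0 / (1.0 + np.exp(-z))

def bn_lstm_step(x, h, c, Wx, Wh, b, p):
    # p holds (gamma, beta) pairs for the input, hidden and cell terms
    zx = bn(x @ Wx, *p["x"])                       # BN(W_x x_{t-1})
    zh = bn(h @ Wh, *p["h"])                       # BN(W_h h_{t-1})
    f, i, o, g = np.split(zx + zh + b, 4, axis=1)  # the four gate pre-activations
    c_new = sigm(f) * c + sigm(i) * np.tanh(g)
    h_new = sigm(o) * np.tanh(bn(c_new, *p["c"]))  # BN on the cell state as well
    return h_new, c_new
```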
Group normalization [Facebook AI Research]
Problem: BatchNorm's error increases rapidly as the batch size decreases
Computer vision tasks often require small batches, constrained by memory consumption
Solution: divide the channels into groups and compute the mean and variance within each group for normalization (see the sketch below)
[Figure: error (%) vs. batch size (images per worker) for Batch Norm and Group Norm]
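A minimal sketch of that grouping (NumPy, NCHW layout; the group count of 32 is just an assumed default):

```python
import numpy as np

def group_norm(x, gamma, beta, groups=32, eps=1e-5):
    n, c, h, w = x.shape                         # x: (N, C, H, W) feature map
    x = x.reshape(n, groups, c // groups, h, w)  # split channels into groups
    mu = x.mean(axis=(2, 3, 4), keepdims=True)   # per-sample, per-group mean
    var = x.var(axis=(2, 3, 4), keepdims=True)   # per-sample, per-group variance
    x = ((x - mu) / np.sqrt(var + eps)).reshape(n, c, h, w)
    return gamma.reshape(1, c, 1, 1) * x + beta.reshape(1, c, 1, 1)

# works the same for any batch size, even N = 1
y = group_norm(np.random.randn(1, 64, 8, 8), np.ones(64), np.zeros(64))
```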
Group normalization
[Figure: Batch Norm, Layer Norm, Instance Norm and Group Norm compared on an (N, C, H, W) feature tensor, showing which axes each scheme normalizes over]
Group normalization
The conclusions are much the same as before, so I will skip the details.
How does batch normalization help optimization? [MIT]
No, it is not about internal covariate shift!
It makes the optimization landscape significantly smoother.
Investigate the connection between ICS and BatchNorm

VGG on CIFAR-10 w/o BatchNorm
Dramatic improvement both in terms of optimization and generalization
Difference in distribution stability
[Figure: training and test accuracy vs. steps for Standard and Standard + BatchNorm at LR=0.1 and LR=0.5, together with activation distributions of Layer #3 and Layer #11]
Questions
1. Is the effectiveness of BatchNorm indeed related to internal covariate shift?
2. Is BatchNorm's stabilization of layer input distributions even effective in reducing
ICS?
Does BatchNorm's performance stem from controlling ICS?
We train the network with random noise injected after BatchNorm layers.
Each activation of each sample in the batch is perturbed with i.i.d. noise with non-zero mean and non-unit variance.
The noise distribution changes at each time step.
[Figure: training accuracy vs. steps and activation distributions (Layer #2, #9, #13) for Standard, Standard + BatchNorm and Standard + "Noisy" BatchNorm]
Is BatchNorm reducing ICS?
Is there a broader notion of ICS that has such a direct link to training
performance?
Attempt to capture ICS from a perspective that is more tied to the underlying
optimization phenomenon.
Measure the difference between the gradients of each layer before and after updates to all the previous layers.
Is BatchNorm reducing ICS?
Def. internal covariate shift (ICS) at step $t$ for layer $i$: $\|G_{t,i} - G'_{t,i}\|_2$, where

$G_{t,i} = \nabla_{W_i^{(t)}} L(W_1^{(t)}, \ldots, W_i^{(t)}, \ldots, W_k^{(t)};\, x^{(t)}, y^{(t)})$

$G'_{t,i} = \nabla_{W_i^{(t)}} L(W_1^{(t+1)}, \ldots, W_{i-1}^{(t+1)}, W_i^{(t)}, \ldots, W_k^{(t)};\, x^{(t)}, y^{(t)})$

$G_{t,i}$ corresponds to the gradient of the layer parameters $W_i$.
$G'_{t,i}$ is the same gradient after all the previous layers have been updated.
The difference reflects the change in the optimization landscape of $W_i$ caused by the changes to its input.
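As a toy illustration of this measure, not the paper's experimental setup, the sketch below uses a hypothetical two-layer linear network with squared loss and compares layer 2's gradient before and after layer 1 is updated:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                    # mini-batch x^{(t)}
y = rng.normal(size=(8, 2))                    # targets y^{(t)}
W1 = 0.1 * rng.normal(size=(4, 8))             # layer 1
W2 = 0.1 * rng.normal(size=(8, 2))             # layer 2
lr = 0.1

def grads(W1, W2):
    err = x @ W1 @ W2 - y                      # prediction error
    gW2 = (x @ W1).T @ err / len(x)            # gradient w.r.t. W2
    gW1 = x.T @ (err @ W2.T) / len(x)          # gradient w.r.t. W1
    return gW1, gW2

gW1, gW2 = grads(W1, W2)                       # G_{t,2}
_, gW2_after = grads(W1 - lr * gW1, W2)        # G'_{t,2}: layer 1 already updated
print(np.linalg.norm(gW2 - gW2_after))         # the ICS of layer 2 at step t
```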
Is BatchNorm reducing ICS?
[Figure (a) VGG: training accuracy at LR=0.1 and LR=0.01, plus the $\ell_2$-difference and cosine angle between $G_{t,i}$ and $G'_{t,i}$ for Layer #5 and Layer #10, for Standard and Standard + BatchNorm]
Is BatchNorm reducing ICS?
[Figure (b) Deep Linear Network: training loss at LR=1e-06 and LR=1e-07, plus the $\ell_2$-difference and cosine angle for Layer #9 and Layer #17, for Standard and Standard + BatchNorm]
Is BatchNorm reducing ICS?
Models with BatchNorm have similar, or even worse, ICS
$G_{t,i}$ and $G'_{t,i}$ are almost uncorrelated
Controlling the distributions of layer inputs might not even reduce ICS
Why does BatchNorm work?
Is there a more fundamental phenomenon at play here?
It reparametrizes the underlying optimization problem to make its landscape significantly smoother.
Landscape smoothness
Loss changes at a smaller rate and the magnitudes of the gradients are smaller too.
[Figure: (a) loss landscape, (b) "effective" β-smoothness, (c) gradient predictiveness vs. steps, for Standard and Standard + BatchNorm]
Lipschitzness

Lipschitz continuous
A function $f: X \to Y$ is Lipschitz continuous
$\Leftrightarrow \exists K \geq 0,\ \forall x_1, x_2 \in X,\ |f(x_1) - f(x_2)| \leq K\,|x_1 - x_2|$
$K$ is a Lipschitz constant; the smallest such $K$ is the (best) Lipschitz constant
β-smoothness

A function $f$ is $\beta$-smooth
$\Leftrightarrow \nabla f$ is $\beta$-Lipschitz
$\Leftrightarrow \|\nabla f(x_1) - \nabla f(x_2)\| \leq \beta\,\|x_1 - x_2\|$
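A small numerical sketch of this definition on a toy quadratic, for which $\beta$ is simply the largest eigenvalue of the (constant) Hessian:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])              # f(x) = 0.5 * x^T A x, so grad f(x) = A x
beta = np.linalg.eigvalsh(A).max()      # Hessian is A; beta = its largest eigenvalue

x1, x2 = np.random.randn(2), np.random.randn(2)
lhs = np.linalg.norm(A @ x1 - A @ x2)   # ||grad f(x1) - grad f(x2)||
rhs = beta * np.linalg.norm(x1 - x2)    # beta * ||x1 - x2||
assert lhs <= rhs + 1e-9                # the beta-smoothness inequality holds
```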
The optimization landscape
[Figure (repeated): (a) loss landscape, (b) "effective" β-smoothness, (c) gradient predictiveness vs. steps, for Standard and Standard + BatchNorm]
The optimization landscape
Improves the Lipschitzness of the loss function
BatchNorm's reparametrization leads to gradients of the loss being more Lipschitz too
The loss exhibits a significantly better "effective" $\beta$-smoothness
Makes the gradients more reliable and predictive
Theoretical analysis
Skipping this part (running away)
Is BatchNorm the best (only?) way to smoothen the landscape?
Is this smoothening effect a unique feature of BatchNorm?
Study schemes that fix the first-order moment of the activations, as BatchNorm does,
and normalize them by the average of their $\ell_p$ norm:
the $\ell_1$, $\ell_2$ and $\ell_\infty$ norms
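A rough sketch of such a scheme (NumPy; the exact scaling used in the paper may differ, so treat this formulation as an assumption): center the activations, then rescale by the average $\ell_p$ norm:

```python
import numpy as np

def lp_normalize(x, p=2, eps=1e-5):
    # x: activations of shape (m, d) for a mini-batch of m samples
    x = x - x.mean(axis=0)                           # fix the first-order moment
    scale = np.linalg.norm(x, ord=p, axis=1).mean()  # average l_p norm per sample
    return x / (scale + eps)

# p = 1, 2 or np.inf correspond to the three variants compared in the figures below
y = lp_normalize(np.random.randn(32, 64), p=np.inf)
```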
Is BatchNorm the best (only?) way to smoothen the landscape?
[Figure: (a) VGG training accuracy and (b) Deep Linear Model training loss vs. steps for Standard, Standard + BatchNorm, Standard + $L_1$, Standard + $L_2$ and Standard + $L_\infty$]
Is BatchNorm the best (only?) way to smoothen the landscape?
[Figure: Layer #11 activation distributions for Standard, Standard + BatchNorm, Standard + $L_1$ Norm, Standard + $L_2$ Norm and Standard + $L_\infty$ Norm]
Is BatchNorm the best (only?) way to smoothen the landscape?
All the normalization strategies offer performance comparable to BatchNorm
For the deep linear network, $\ell_1$-normalization performs even better than BatchNorm
$\ell_p$-normalization leads to larger distributional covariate shift than the vanilla network, yet still yields improved optimization performance
Conclusion
BatchNorm might not even be reducing internal covariate shift.
BatchNorm makes the landscape of the corresponding optimization problem significantly smoother.
The paper provides empirical demonstration and theoretical justification (Lipschitzness).
The smoothening effect is not uniquely tied to BatchNorm.
Q & A
Extra papers
Understanding Batch Normalization (https://arxiv.org/abs/1806.02375)
Norm matters: efficient and accurate normalization schemes in deep networks
(https://arxiv.org/abs/1803.01814)
Batch-normalized Recurrent Highway Networks
(https://arxiv.org/abs/1809.10271)
Differentiable Learning-to-Normalize via Switchable Normalization
(https://arxiv.org/abs/1806.10779)