Neural networks
Feedforward neural network - activation function
ARTIFICIAL NEURON
2
Topics: connection weights, bias, activation function
• Neuron pre-activation (or input activation): a(x) = b + Σ_i w_i x_i = b + wᵀx
• Neuron (output) activation: h(x) = g(a(x)) = g(b + Σ_i w_i x_i)
• x = (x_1, ..., x_d) is the input
• w = (w_1, ..., w_d) are the connection weights
• b is the neuron bias
• g(·) is called the activation function
(Math for the slides "Feedforward neural network", hugo.larochelle@usherbrooke.ca, Université de Sherbrooke, September 6, 2012.)
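The neuron defined above, a pre-activation a(x) = b + wᵀx followed by an output h(x) = g(a(x)), can be sketched in plain Python. The weights, bias, and input below are made-up illustrative values, not from the slides:

```python
import math

def pre_activation(x, w, b):
    # a(x) = b + sum_i w_i x_i  (equivalently b + w^T x)
    return b + sum(wi * xi for wi, xi in zip(w, x))

def neuron_output(x, w, b, g):
    # h(x) = g(a(x)): apply the activation function g to the pre-activation
    return g(pre_activation(x, w, b))

# Example with a sigmoid activation g (illustrative weights and bias)
sigm = lambda a: 1.0 / (1.0 + math.exp(-a))
a = pre_activation([1.0, 2.0], w=[0.5, -0.25], b=0.1)   # 0.1 + 0.5 - 0.5 = 0.1
h = neuron_output([1.0, 2.0], w=[0.5, -0.25], b=0.1, g=sigm)
```

Swapping in a different g (linear, tanh, rectified linear) changes only the final squashing, not the weighted sum.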
ACTIVATION FUNCTION
3
Topics: linear activation function
• Performs no input squashing
• Not very interesting...
• g(a) = a
ACTIVATION FUNCTION
4
Topics: sigmoid activation function
• Squashes the neuron's pre-activation between 0 and 1
• Always positive
• Bounded
• Strictly increasing
• g(a) = sigm(a) = 1 / (1 + exp(-a))
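A minimal check of the sigmoid's stated properties (always positive, bounded in (0, 1), strictly increasing):

```python
import math

def sigm(a):
    # sigm(a) = 1 / (1 + exp(-a)): squashes any real pre-activation into (0, 1)
    return 1.0 / (1.0 + math.exp(-a))

# Evaluate on increasing pre-activations; outputs stay in (0, 1) and increase
values = [sigm(a) for a in (-5.0, 0.0, 5.0)]
```

Note sigm(0) = 0.5, the midpoint of the output range, which is why the bias b shifts where the neuron is "half on".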
ACTIVATION FUNCTION
5
Topics: hyperbolic tangent (‘‘tanh’’) activation function
• Squashes the neuron's pre-activation between -1 and 1
• Can be positive or negative
• Bounded
• Strictly increasing
• g(a) = tanh(a) = (exp(a) - exp(-a)) / (exp(a) + exp(-a)) = (exp(2a) - 1) / (exp(2a) + 1)
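The two algebraic forms of tanh on the slide are equivalent (multiply numerator and denominator of the first by exp(a)); a quick numeric check:

```python
import math

def tanh_explicit(a):
    # First form on the slide: (exp(a) - exp(-a)) / (exp(a) + exp(-a))
    return (math.exp(a) - math.exp(-a)) / (math.exp(a) + math.exp(-a))

def tanh_via_exp2a(a):
    # Second form on the slide: (exp(2a) - 1) / (exp(2a) + 1)
    e2a = math.exp(2.0 * a)
    return (e2a - 1.0) / (e2a + 1.0)
```

Both agree with the library tanh, and the outputs stay in (-1, 1), centered at tanh(0) = 0 rather than the sigmoid's 0.5.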
ACTIVATION FUNCTION
6
Topics: rectified linear activation function
• Bounded below by 0 (always non-negative)
• Not upper bounded
• Monotonically increasing (flat at 0 for a < 0, so not strictly increasing)
• Tends to give neurons with sparse activities
• g(a) = reclin(a) = max(0, a)
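The sparsity claim can be illustrated directly: with pre-activations drawn symmetrically around zero (an illustrative simulation, not from the slides), roughly half of the rectified linear outputs are exactly 0, i.e. those neurons are "off":

```python
import random

def reclin(a):
    # Rectified linear activation: max(0, a)
    return max(0.0, a)

random.seed(0)
# Simulated zero-mean pre-activations for 1000 neurons
acts = [reclin(random.uniform(-1.0, 1.0)) for _ in range(1000)]
zero_fraction = sum(1 for h in acts if h == 0.0) / len(acts)
```

Every negative pre-activation maps to an exact 0 (not just a small value, as with sigmoid), which is what produces genuinely sparse activities.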