Martin Ayvazyan
Activation distribution in
a neural network
20 August 2018
Restricted © 2017 Mentor Graphics Corporation
Contents
- Why are neural networks so important
- Basic network characteristics
- Basic description of the process, formulation of algorithms, comparison of behavioral characteristics
- Common behavioral characteristics of the process
- A visual representation of the process
- Core characteristics of the software
Cell Division and Cancer
The global economic crisis prevention /
elimination
Traffic control / jams prevention
Deep learning
Astronomy
Basic definitions
ER model
G(N, p)
- N – number of nodes in the graph
- p – probability that any given pair of nodes is connected by an edge
- P(G) – the probability that the graph is connected

- If p = c·ln(N)/N and c > 1, the graph is almost surely connected (c < 1 ⇒ almost surely disconnected).
- If p = c·ln(N)/N with c > 3 and N > 100, then P_{N,p}(G) ≥ 1 − 1/N.
- The number of graphs on N nodes with n edges: C(N·(N−1)/2, n).
- The probability of one fixed graph G₀ with n edges: P(G₀) = p^n · (1 − p)^(N·(N−1)/2 − n).
- The probability that the random graph G′ has exactly n edges: P(G′) = C(N·(N−1)/2, n) · p^n · (1 − p)^(N·(N−1)/2 − n).
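The connectivity thresholds above can be checked empirically. A minimal sketch using only the standard library (the graph size N = 200, the 50-trial sample, and the function names are my choices for illustration):

```python
import math
import random
from collections import deque

def er_graph(N, p, rng):
    """Sample an Erdos-Renyi graph G(N, p) as adjacency lists."""
    adj = [[] for _ in range(N)]
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def is_connected(adj):
    """BFS from node 0; the graph is connected iff every node is reached."""
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(adj)

rng = random.Random(42)
N, trials = 200, 50
results = {}
for c in (0.5, 2.0):
    p = c * math.log(N) / N
    results[c] = sum(is_connected(er_graph(N, p, rng)) for _ in range(trials))
    print(f"c = {c}: connected in {results[c]}/{trials} samples")
```

With c below 1 almost every sample is disconnected; with c above 1 almost every sample is connected, matching the threshold stated above.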
Small world
Clusterization
C_j(q_j) = t_j / (q_j · (q_j − 1) / 2)

C = (1/N) · Σ_j C_j

where q_j is the degree of node j and t_j is the number of links among its neighbors.
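The two formulas above translate directly into code; a small sketch (the example graph is mine):

```python
def local_clustering(adj, j):
    """C_j(q_j) = t_j / (q_j * (q_j - 1) / 2), where q_j is the degree of
    node j and t_j is the number of links among j's neighbours."""
    nbrs = adj[j]
    q = len(nbrs)
    if q < 2:
        return 0.0
    t = sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:] if b in adj[a])
    return t / (q * (q - 1) / 2)

def mean_clustering(adj):
    """C = (1/N) * sum_j C_j."""
    return sum(local_clustering(adj, j) for j in range(len(adj))) / len(adj)

# A triangle (0-1-2) with a pendant node 3 attached to node 2.
adj = [[1, 2], [0, 2], [0, 1, 3], [2]]
print([round(local_clustering(adj, j), 3) for j in range(4)])  # → [1.0, 1.0, 0.333, 0.0]
print(round(mean_clustering(adj), 3))                          # → 0.583
```

Nodes inside the triangle have clustering 1; node 2 loses clustering because its third neighbour connects to nothing else.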
Degree distribution
Algorithm A
- Input: a network, a list of active nodes in the network, μ, λ
- Step 1: If there is no active node in the network, exit.
- Step 2: Select a node at random.
- Step 3: If the selected node is not active, go to Step 2.
- Step 4: Select a random adjacent node; if that neighbor is not active, activate it with probability λ/(λ + μ).
- Step 5: Deactivate the currently selected node with probability μ/(λ + μ).
- Step 6: Go to Step 1.
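The steps above can be sketched as a direct simulation. The `max_steps` cap and the function name are my additions (with μ = 0 the original loop never terminates):

```python
import random

def algorithm_a(adj, active, lam, mu, max_steps=100_000, rng=None):
    """Asynchronous activation spread (a sketch of Algorithm A).

    adj: adjacency lists; active: set of initially active nodes.
    Returns the active-node count recorded after each completed step."""
    rng = rng or random.Random()
    active = set(active)
    history = []
    n = len(adj)
    for _ in range(max_steps):
        if not active:                          # Step 1: no active node -> exit
            break
        node = rng.randrange(n)                 # Step 2: random node
        if node not in active:                  # Step 3: inactive -> reselect
            continue
        if adj[node]:                           # Step 4: one random neighbour
            nbr = rng.choice(adj[node])
            if nbr not in active and rng.random() < lam / (lam + mu):
                active.add(nbr)
        if rng.random() < mu / (lam + mu):      # Step 5: maybe deactivate
            active.discard(node)
        history.append(len(active))             # Step 6: next iteration
    return history

triangle = [[1, 2], [0, 2], [0, 1]]
hist = algorithm_a(triangle, {0}, lam=1.0, mu=0.0, max_steps=200,
                   rng=random.Random(1))
print(hist[-1])  # with mu = 0 nothing deactivates; all three nodes end up active
```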
Algorithm A
N = 1024, p = 0.5, μ = 0.0
Algorithm A
Left: P = {0.5, 0.1, 0.05}, λ = 0.005. Right: P = {0.5, 0.1, 0.05}, λ = 0.05.
Algorithm B
- Input: a network, a list of active nodes in the network, μ, λ
- Step 1: If there is no active node in the network, exit.
- Step 2: Select a node at random.
- Step 3: If the selected node is not active, go to Step 2.
- Step 4: Activate each inactive adjacent node independently with probability λ/(λ + μ).
- Step 5: Deactivate the currently selected node with probability μ/(λ + μ).
- Step 6: Go to Step 1.
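Algorithm B differs from Algorithm A only in Step 4, which touches every inactive neighbor instead of one random one. That step alone can be sketched as (the function name is mine):

```python
import random

def step4_b(adj, active, node, lam, mu, rng):
    """Algorithm B's Step 4: every inactive neighbour of the selected node
    is activated independently with probability lam / (lam + mu)."""
    for nbr in adj[node]:
        if nbr not in active and rng.random() < lam / (lam + mu):
            active.add(nbr)
    return active

# Star with centre 0 and leaves 1..3; with lam = 1, mu = 0 every leaf activates.
active = step4_b([[1, 2, 3], [0], [0], [0]], {0}, 0, lam=1.0, mu=0.0,
                 rng=random.Random(0))
print(sorted(active))  # → [0, 1, 2, 3]
```

Because each neighbor gets an independent trial, activation spreads much faster than in Algorithm A on high-degree networks.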
Algorithm B
N=1024, p=0.5, μ=0.0
Algorithm B
Left: N = 1024, P = {0.5, 0.1, 0.05}, λ = 0.05. Right: N = 1024, P = {0.5, 0.1, 0.05}, λ = 0.0005.
Algorithm C
- Input: a network, a list of active nodes in the network, μ, λ
- Step 1: If there is no active node in the network, exit.
- Step 2: For each node that is active at the current stage:
  - Step 2.1: Select a random adjacent node; if that neighbor is not active, activate it with probability λ/(λ + μ).
  - Step 2.2: Deactivate the node with probability μ/(λ + μ).
- Step 3: Go to Step 1.
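Unlike A and B, Algorithm C sweeps all active nodes synchronously in each stage. One stage can be sketched as follows (applying activations and deactivations against a snapshot of the current state is my reading of the steps):

```python
import random

def algorithm_c_step(adj, active, lam, mu, rng):
    """One synchronous stage of Algorithm C: each active node tries to
    activate one random neighbour, then deactivates itself with
    probability mu / (lam + mu)."""
    activated, deactivated = set(), set()
    for node in active:
        if adj[node]:                                   # Step 2.1
            nbr = rng.choice(adj[node])
            if nbr not in active and rng.random() < lam / (lam + mu):
                activated.add(nbr)
        if rng.random() < mu / (lam + mu):              # Step 2.2
            deactivated.add(node)
    return (active | activated) - deactivated

triangle = [[1, 2], [0, 2], [0, 1]]
state = algorithm_c_step(triangle, {0}, lam=1.0, mu=0.0, rng=random.Random(0))
print(len(state))  # → 2 (the seed node plus one activated neighbour)
```

Iterating the step until the returned set is empty reproduces the full algorithm.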
Algorithm C
N=1024, p=0.5, μ=0.0
Algorithm C
Left: N = 1024, P = {0.5, 0.1, 0.05}, λ = 0.0005. Right: N = 1024, P = {0.5, 0.1, 0.05}, λ = 0.0005.
Common behavioral characteristics of the process
ER(n,p), active_nodes_count = 1
- The probability that at least one node is activated in the current step:

  'A': (1/n) · λ
  'B': (1/n) · (1 − (1 − λ)^(p·(n−1)))
  'C': λ
Common behavioral characteristics of the process
- The probability that exactly m nodes, with 1 < m ≤ p·(n−1), are activated in the current step:

  'A': 0
  'B': (1/n) · C(p·(n−1), m) · λ^m · (1 − λ)^(p·(n−1) − m)
  'C': 0

- The probability that no active node remains in the network after the current step:

  'A': μ · (1/n) · (1 − λ)
  'B': μ · (1/n) · (1 − λ)^(p·(n−1))
  'C': μ · (1 − λ)
A visual representation of the process
ER(n, p) / n = 65, p = 0.05, active_nodes_count = 65, λ = 0.314159265, μ = 1.0
Snapshots shown at steps 0, 20, 30, 50, 500, and 1500.
Core characteristics of the software
- Accuracy
- Efficiency
- Platform independence
www.mentor.com