Biological Neuron Artificial Neuron
• Bio-ANN [https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm]
  [https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/]
• Activation functions [https://en.wikipedia.org/wiki/Activation_function]
• Layer of Neurons []
• Role of Bias [https://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks]
• McCulloch-Pitts Model (unsup) [https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/]
• Perceptron (sup) []
• Learning: Supervised/Unsupervised/Reinforcement [https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm]
• Applications of Neural Networks
• ANN learning methods
• Desirable properties of ANN: stability, plasticity
• Introduction to Back Propagation Networks
Biological NN vs ANN
• Characteristic abilities of biological neural systems
  – pattern recognition
  – perception
  – motor control
  – memorize
  – learn
  – generalize
• Components
  – Neurons: the basic building blocks of biological neural systems are nerve cells, referred to as neurons
  – Synapses: interconnections between the axon of one neuron and a dendrite of another neuron
• Algorithmic models of the features of biological neural systems are called "artificial neural networks (ANN)"
There are on the order of 10-500 billion neurons in the human cortex, with 60 trillion synapses, arranged in approximately 1000 main modules, each with 500 neural networks.
Types of ANN (by topology)
• FeedForward ANN
  – The information flow is unidirectional.
  – A unit sends information to other units from which it does not receive any information.
  – There are no feedback loops.
  – Applications: pattern generation/recognition/classification.
• Feedback ANN
  – Feedback loops are allowed.
  – Applications: content-addressable memories.
More Types of ANNs
• Single-layer NNs, such as the Hopfield network;
• Multilayer feedforward NNs, including, for example, standard
backpropagation, functional link and product unit networks;
• Temporal NNs, such as the Elman and Jordan simple recurrent
networks as well as time-delay neural networks;
• Self-organizing NNs, such as the Kohonen self-organizing
feature maps and the learning vector quantizer;
• Combined feedforward and self-organizing NNs, such as the
radial basis function networks.
Single Neuron
• Components
  – X1, X2: numerical inputs
  – f: a non-linear function called the Activation Function; it takes a single number and performs a fixed mathematical operation on it
  – 1: bias input, with weight b (see the sketch below)
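A minimal sketch of this single neuron in Python, assuming two inputs, a sigmoid activation, and illustrative weight values (w1, w2, b are hypothetical placeholders, not values from the slides):

```python
import math

def single_neuron(x1, x2, w1, w2, b):
    """Compute the output of a single artificial neuron.

    net = w1*x1 + w2*x2 + b*1  (the bias input is the constant 1)
    The activation f squashes net into (0, 1) via a sigmoid.
    """
    net = w1 * x1 + w2 * x2 + b * 1.0
    return 1.0 / (1.0 + math.exp(-net))   # sigmoid activation f(net)

# Example with illustrative (hypothetical) weights:
print(single_neuron(x1=0.5, x2=0.8, w1=0.4, w2=-0.2, b=0.1))
```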
Activation Function
[Ref: Engelbrecht, Andries P., Computational Intelligence: An Introduction, Wiley]
(net denotes the weighted input sum, θ the neuron threshold, and β a constant.)
Linear: produces a linearly modulated output.
  f(net − θ) = β(net − θ)
Step: takes a real-valued input and squashes it to the range [β1, β2], binary or bipolar.
  f(net − θ) = β1  if net ≥ θ
             = β2  if net < θ
Ramp: a combination of the linear and step functions.
  f(net − θ) = β        if (net − θ) ≥ β
             = net − θ  if |net − θ| < β
             = −β       if (net − θ) ≤ −β
Sigmoid: takes a real-valued input and squashes it to the range (0, 1); β controls the steepness.
tanh: takes a real-valued input and squashes it to the range [-1, 1]; β controls the steepness.
ReLU (Rectified Linear Unit): takes a real-valued input and thresholds it at zero (replaces negative values with zero).
  f(x) = max(0, x)
(A code sketch of these functions follows below.)
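A minimal Python sketch of the activation functions above, assuming the argument already has the threshold subtracted (i.e. it receives net − θ) and using beta for the steepness/saturation constant; the parameter defaults are illustrative:

```python
import math

def linear(x, beta=1.0):
    return beta * x                           # f(net - θ) = β(net - θ)

def step(x, beta1=1.0, beta2=0.0):
    return beta1 if x >= 0 else beta2         # binary (1/0) or bipolar (1/-1)

def ramp(x, beta=1.0):
    if x >= beta:
        return beta                           # saturate at +β
    if x <= -beta:
        return -beta                          # saturate at -β
    return x                                  # linear in between

def sigmoid(x, beta=1.0):
    return 1.0 / (1.0 + math.exp(-beta * x))  # output in (0, 1)

def tanh_act(x, beta=1.0):
    return math.tanh(beta * x)                # output in (-1, 1)

def relu(x):
    return max(0.0, x)                        # thresholds negative values at zero
```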
Layers of Neurons (e.g. in a Feedforward NN)
• Input nodes
  – No computation is performed in any of the input nodes
  – They just pass the information on to the hidden nodes
• Hidden nodes
  – They perform computations and transfer information from the input nodes to the output nodes
  – There can be zero or more hidden layers
• Output nodes
  – Responsible for computations and for transferring information from the network to the outside world
  – One output node per decision parameter (see the forward-pass sketch below)
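A minimal forward-pass sketch for such a layered feedforward network, assuming one hidden layer, sigmoid activations, and hand-picked (hypothetical) weights and biases:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One layer: each neuron computes f(sum_i w_i * x_i + b)."""
    return [sigmoid(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
            for neuron_w, b in zip(weights, biases)]

# Hypothetical 2-input -> 2-hidden -> 1-output network
x = [0.6, 0.9]                # input nodes: no computation, values just pass on
hidden = layer_forward(x, weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.1, -0.2])
output = layer_forward(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)                 # one output node per decision parameter
```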
McCulloch-Pitts Neuron
• The first primitive model of a biological neuron, conceptualized by Warren Sturgis McCulloch and Walter Harry Pitts in 1943
• Elements
  – Neuron: a computational unit in which the input signals are combined and an output is fired
    • Summation Function: simply calculates the sum of the incoming (excitatory) inputs.
    • Activation Function: essentially a step function that checks whether the summation is greater than or equal to a preset threshold value; if yes, the neuron fires (output = 1), if not, the neuron does not fire (output = 0).
  – Neuron fires: output = 1, if Summation >= Threshold
  – Neuron does not fire: output = 0, if Summation < Threshold
  – Excitatory Input: an incoming binary signal to the neuron, which can have only two values, 0 (= OFF) or 1 (= ON)
  – Inhibitory Input: if this input is on, it will not allow the neuron to fire, even if there are other excitatory inputs that are on
  – Output: the value 0 indicates that the neuron does not fire, the value 1 indicates that it does fire
Function of the McCulloch-Pitts Model
• Design
  – The McCulloch-Pitts neuron model can be used to compute some simple functions that involve binary inputs and outputs.
• Steps (see the sketch after this list)
  – The input signals are switched on and the neuron is activated.
  – If the neuron detects that an inhibitory input is switched on, the output is straightaway zero, i.e. the neuron does not fire.
  – If there is no inhibitory input, the neuron proceeds to count the number of excitatory inputs that are switched on.
  – If this count is greater than or equal to the preset threshold value, the neuron fires (output = 1); otherwise the neuron does not fire (output = 0).
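A minimal sketch of these steps in Python; the function name and argument layout are illustrative, not prescribed by the model:

```python
def mcculloch_pitts(excitatory, inhibitory, threshold):
    """McCulloch-Pitts neuron: binary inputs, binary output.

    - If any inhibitory input is on, the neuron does not fire (output 0).
    - Otherwise it fires (output 1) iff the number of excitatory inputs
      that are on is >= the preset threshold.
    """
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0
```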
Illustration of McCulloch-Pitts Model
https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
Illustration of McCulloch-Pitts Model
Design of McCulloch-Pitts Neuron for AND Function
• For the neuron to fire, both excitatory input signals have to be enabled.
• So it is intuitive that the threshold value should be 2.
• Additionally, if the inhibitory input is on, then irrespective of any other input, the neuron will not fire.
https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
Design of McCulloch-Pitts Neuron for OR Function
• For the neuron to fire, at least one excitatory input signal has to be enabled.
• So it is intuitive that the threshold value should be 1 (see the usage example after this slide).
• Additionally, if the inhibitory input is on, then irrespective of any other input, the neuron will not fire.
https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
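Using the mcculloch_pitts sketch above, the AND and OR designs on the two preceding slides reduce to choosing the threshold (2 for AND, 1 for OR):

```python
# AND: both excitatory inputs must be on, so threshold = 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print("AND", x1, x2, "->", mcculloch_pitts([x1, x2], inhibitory=[0], threshold=2))

# OR: at least one excitatory input must be on, so threshold = 1
for x1 in (0, 1):
    for x2 in (0, 1):
        print("OR ", x1, x2, "->", mcculloch_pitts([x1, x2], inhibitory=[0], threshold=1))
```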
Design of McCulloch-Pitts Neuron for Real-Life Decision Making
• Problem: You like going to a particular movie if it is a new release, but you will watch it only if the ticket price is cheap. Further, you cannot plan a movie on weekdays, as they are busy.
• Design (see the sketch below)
  – Excitatory Inputs
    • X1: IsMovieNew
    • X2: IsTicketCheap
  – Output Function: AND (since the output is 1 only if both X1 and X2 are 1)
  – Inhibitory Input
    • IsWeekday: if it is on, there is no question of planning for the movie.
https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
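The movie decision maps onto the same sketch: the two excitatory inputs feed an AND (threshold 2) and IsWeekday acts as the inhibitory input; plan_movie is an illustrative wrapper name:

```python
def plan_movie(is_movie_new, is_ticket_cheap, is_weekday):
    # AND of the excitatory inputs, vetoed by the inhibitory IsWeekday input
    return mcculloch_pitts(excitatory=[is_movie_new, is_ticket_cheap],
                           inhibitory=[is_weekday],
                           threshold=2)

print(plan_movie(1, 1, 0))  # new movie, cheap ticket, not a weekday -> 1 (go)
print(plan_movie(1, 1, 1))  # weekday inhibits the neuron -> 0 (do not go)
print(plan_movie(1, 0, 0))  # ticket not cheap -> 0
```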
Limitations of the McCulloch-Pitts Model
• There is no (machine) learning in this model.
• The model was not built to work as a machine learning model in the first place.
• Rather, McCulloch and Pitts simply wanted to build a mathematical model to represent the workings of a biological neuron.
• But this humble-looking model inspired other researchers to come up with true machine-learning-based neural models in later years.
Learning Methods
• Supervised Learning
  – The model is trained using examples of the expected output value for each input combination.
  – For example, pattern recognition: the ANN comes up with a guess for a given input vector, then compares the guess with the corresponding "correct" output value and adjusts its weights according to the error (see the sketch after this list).
• Unsupervised Learning
  – Required when there is no example data set with known answers.
  – For example, searching for a hidden pattern: clustering, i.e. dividing a set of elements into groups according to some unknown pattern, is carried out based on the existing data sets.
• Reinforcement Learning
  – This strategy is built on observation.
  – The ANN makes a decision by observing its environment. If the observation is negative, the network adjusts its weights so as to make a different, required decision the next time.
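To make the supervised "adjust weights according to the error" idea concrete, here is a minimal perceptron-style update sketch (not taken from the slides; the learning rate and the AND training data are illustrative):

```python
# Minimal perceptron-style supervised learning sketch (illustrative values).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
w, b, lr = [0.0, 0.0], 0.0, 0.1                              # weights, bias, learning rate

for epoch in range(20):
    for x, target in data:
        guess = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
        error = target - guess                               # compare guess with "correct" output
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]   # adjust weights according to the error
        b += lr * error

print(w, b)   # the learned weights separate input (1, 1) from the others
```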
Applications of ANN
• Classification - where the aim is to predict the class of an input vector;
• Pattern matching - where the aim is to produce the pattern best associated with a given input vector;
• Pattern completion - where the aim is to complete the missing parts of a given input vector;
• Optimization - where the aim is to find the optimal values of the parameters in an optimization problem;
• Control - where, given an input vector, an appropriate action is suggested;
• Function approximation / time series modeling - where the aim is to learn the functional relationship between input and desired output vectors;
• Data mining - with the aim of discovering hidden patterns in data; also referred to as knowledge discovery.