UNIT II
Neural Networks
Course :Machine Learning
By: Dr. P. Indira Priyadarsini, B.Tech, M.Tech, Ph.D
MULTI-LAYERED FEED-FORWARD NEURAL NETWORK ARCHITECTURES
Multilayer networks solve the classification problem for classes that are not linearly separable by employing hidden layers, whose neurons are not directly connected to the output. The additional hidden layers can be interpreted geometrically as additional hyperplanes, which enhance the separation capacity of the network. Figure 2.2 shows typical multilayer network architectures.
This architecture introduces a new question: how can the hidden units, for which the desired output is not known, be trained? The back-propagation algorithm offers a solution to this problem.
Input Nodes – The Input nodes provide information from the outside world to the
network and are together referred to as the “Input Layer”. No computation is
performed in any of the Input nodes – they just pass on the information to the hidden
nodes.
Hidden Nodes – The Hidden nodes have no direct connection with the outside world
(hence the name “hidden”). They perform computations and transfer information
from the input nodes to the output nodes. A collection of hidden nodes forms
a “Hidden Layer”.
While a feedforward network will only have a single input layer and a single output
layer, it can have zero or multiple Hidden Layers.
Output Nodes – The Output nodes are collectively referred to as the “Output Layer”
and are responsible for computations and transferring information from the network
to the outside world.
The training occurs in a supervised manner. The basic idea is to present the input vector to the network and calculate, in the forward direction, the output of each layer and the final output of the network.
For the output layer the desired values are known, so the weights can be adjusted as for a single-layer network; in the case of the BP algorithm, according to the gradient descent rule.
To calculate the weight changes in the hidden layers, the error in the output layer is back-propagated to these layers according to the connecting weights.
This process is repeated for each sample in the training set. One cycle through the training set is called an epoch.
The number of epochs needed to train the network depends on various parameters, especially on the error calculated in the output layer.
The following description of the back-propagation algorithm is based on the descriptions in [rume86], [faus94] and [patt96].
The assumed architecture is depicted in Figure 2.3. The input vector has n dimensions, the output vector has m dimensions, the bias (the constant input used) is -1, and there is one hidden layer with g neurons. The matrix V holds the weights of the neurons in the hidden layer, and the matrix W defines the weights of the neurons in the output layer. The learning parameter is η, and the momentum is α.
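As a rough, hypothetical illustration of this setup (the dimensions, ranges and values below are placeholders, not taken from Figure 2.3):

```python
import numpy as np

n, g, m = 2, 3, 1          # input dimension, hidden neurons, output dimension (illustrative)
eta, alpha = 0.1, 0.9      # learning parameter η and momentum α (illustrative)

# V holds the hidden-layer weights, W the output-layer weights; the extra row in each
# matrix corresponds to the constant bias input of -1.
V = np.random.uniform(-0.5, 0.5, size=(n + 1, g))
W = np.random.uniform(-0.5, 0.5, size=(g + 1, m))

x = np.array([0.0, 1.0])      # an example input vector with n components
x_b = np.append(x, -1.0)      # the same vector with the constant bias input -1 appended
```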
BACKPROPAGATION
• In machine learning, gradient descent and back propagation often appear together, and the two terms are sometimes used interchangeably.
• Back propagation can be considered part of gradient descent: it is the implementation of gradient descent in multi-layer neural networks.
• Back propagation, also named the Generalized Delta Rule, is an algorithm used in the training of ANNs for supervised learning (generalizations exist for other artificial neural networks). It efficiently computes the gradient of the error function with respect to the weights of the network for a single input-output example.
• This makes it feasible to use gradient methods for training multi-layer networks, updating the weights to minimize the loss; a minimal sketch of such an update follows this list.
• Since the same training rule is applied recursively for each layer of the neural network, we can calculate the contribution of each weight to the total error, working backwards from the output layer to the input layer.
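To make the idea of a gradient-based weight update concrete, here is a minimal sketch for a single sigmoid neuron with a squared-error loss; all names (w, eta, x, y_d) and values are illustrative placeholders, not taken from any referenced source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid neuron with a single input and squared error E = 1/2 * (y_d - y)^2.
w, eta = 0.3, 0.5        # illustrative weight and learning rate
x, y_d = 1.0, 1.0        # one input-output example

y = sigmoid(w * x)                         # forward computation
dE_dw = -(y_d - y) * y * (1.0 - y) * x     # chain rule: dE/dy * dy/dz * dz/dw
w = w - eta * dE_dw                        # one gradient-descent step
```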
Back Propagation Neural Networks Continued….
• A Back Propagation Neural Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer and an output layer.
• As its name suggests, back propagation takes place in this network.
• The error, calculated at the output layer by comparing the target output with the actual output, is propagated back towards the input layer.
Architecture
• As shown in the diagram, the architecture of a BPN has three interconnected layers, with weights on the connections between them.
• The hidden layer and the output layer also have a bias whose input is held constant; the weight on the bias connection is learned along with the other weights.
• As is clear from the diagram, the BPN works in two phases.
• One phase sends the signal forward from the input layer to the output layer, and the other phase back-propagates the error from the output layer to the input layer.
BACKPROPAGATION (CONTINUED)
• Simply put, we propagate the total error backward through the connections of the network, layer by layer, calculate the contribution (gradient) of each weight and bias to the total error in every layer, then use the gradient descent algorithm to optimize the weights and biases, and eventually minimize the total error of the neural network.
• The back propagation algorithm has two phases:
• Forward pass phase: feed-forward propagation of the input pattern signals through the network, from the inputs towards the network outputs.
• Backward pass phase: computes the 'error signal' – the error (the difference between the actual and desired output values) is propagated backwards through the network, starting from the output units towards the input units.
• Visualizing these two phases can help in understanding how the backpropagation algorithm works step by step; a compact training-loop sketch follows this list.
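The skeleton below is a rough sketch, not a reference implementation: a single-hidden-layer network with sigmoid activations and a squared-error loss, with thresholds folded into the weight matrices via a constant bias input of -1. The helper names forward, backward and train are placeholders defined here, not a library API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, V, W):
    """Forward pass phase: input -> hidden -> output (bias input fixed at -1)."""
    h = sigmoid(np.append(x, -1.0) @ V)       # hidden-layer activations
    y = sigmoid(np.append(h, -1.0) @ W)       # output-layer activations
    return h, y

def backward(x, h, y, y_d, V, W, eta):
    """Backward pass phase: propagate the output error and update W and V in place."""
    delta_out = y * (1.0 - y) * (y_d - y)                  # output-layer error gradients
    delta_hid = h * (1.0 - h) * (W[:-1, :] @ delta_out)    # hidden-layer error gradients
    W += eta * np.outer(np.append(h, -1.0), delta_out)     # adjust output-layer weights
    V += eta * np.outer(np.append(x, -1.0), delta_hid)     # adjust hidden-layer weights

def train(X, Y, V, W, eta=0.5, epochs=1000):
    """One pass over the whole training set is one epoch."""
    for _ in range(epochs):
        for x, y_d in zip(X, Y):
            h, y = forward(x, V, W)            # forward pass phase
            backward(x, h, y, y_d, V, W, eta)  # backward pass phase
    return V, W
```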
The back-propagation training algorithm
Step 1: Initialization
Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range:

$$\left( -\frac{2.4}{F_i},\ +\frac{2.4}{F_i} \right)$$

where $F_i$ is the total number of inputs of neuron $i$ in the network. The weight initialization is done on a neuron-by-neuron basis.
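A minimal NumPy sketch of this initialization rule, under the assumption that the fan-in F_i of a neuron is the number of its incoming connections (including the bias); the function name and dimensions are illustrative only.

```python
import numpy as np

def init_weights(fan_in, n_neurons, rng=np.random.default_rng()):
    """Uniform initialization in (-2.4/F_i, +2.4/F_i); one column per neuron."""
    bound = 2.4 / fan_in
    return rng.uniform(-bound, bound, size=(fan_in, n_neurons))

# Example: 2 inputs + 1 bias feed 3 hidden neurons; 3 hidden outputs + 1 bias feed 1 output neuron.
V = init_weights(fan_in=3, n_neurons=3)   # hidden-layer weights (thresholds carried in the bias row)
W = init_weights(fan_in=4, n_neurons=1)   # output-layer weights
```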
Step 2: Activation
Activate the back-propagation neural network
by applying inputs x1(p), x2(p),…, xn(p) and
desired outputs yd,1(p), yd,2(p),…, yd,n(p).
(a) Calculate the actual outputs of the neurons in the hidden layer:

$$y_j(p) = \mathrm{sigmoid}\!\left[ \sum_{i=1}^{n} x_i(p)\, w_{ij}(p) - \theta_j \right]$$

where $n$ is the number of inputs of neuron $j$ in the hidden layer, $\theta_j$ is the threshold applied to neuron $j$, and sigmoid is the sigmoid activation function.
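A vectorized NumPy rendering of this hidden-layer computation; x, V and theta_h are placeholder names for x_i(p), w_ij(p) and θ_j, and the numbers are made up for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])                 # x_i(p): input pattern
V = np.array([[0.5, 0.9, -0.3],
              [0.4, 1.0,  0.2]])         # w_ij(p): input-to-hidden weights (one column per hidden neuron)
theta_h = np.array([0.8, -0.1, 0.3])     # θ_j: hidden-layer thresholds

y_hidden = sigmoid(x @ V - theta_h)      # y_j(p) for all hidden neurons at once
```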
Step 2: Activation (continued)
(b) Calculate the actual outputs of the neurons in the output layer:

$$y_k(p) = \mathrm{sigmoid}\!\left[ \sum_{j=1}^{m} x_{jk}(p)\, w_{jk}(p) - \theta_k \right]$$

where $m$ is the number of inputs of neuron $k$ in the output layer, $x_{jk}(p)$ is the output of hidden neuron $j$ fed to neuron $k$, and $\theta_k$ is the threshold applied to neuron $k$.
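The output layer is computed in the same way. The sketch below reuses hidden-layer outputs like those from step (a); W and theta_o stand for w_jk(p) and θ_k, with made-up values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y_hidden = np.array([0.52, 0.88, 0.31])  # x_jk(p): hidden-layer outputs from step (a)
W = np.array([[-1.2],
              [ 1.1],
              [ 0.4]])                   # w_jk(p): hidden-to-output weights
theta_o = np.array([0.3])                # θ_k: output-layer threshold(s)

y_output = sigmoid(y_hidden @ W - theta_o)   # y_k(p) for all output neurons
```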
Step 3: Weight training
Update the weights in the back-propagation network, propagating backward the errors associated with the output neurons.
(a) Calculate the error gradient for the neurons in the output layer:

$$\delta_k(p) = y_k(p)\,\left[1 - y_k(p)\right]\, e_k(p)$$

where

$$e_k(p) = y_{d,k}(p) - y_k(p)$$

Calculate the weight corrections:

$$\Delta w_{jk}(p) = \eta\, y_j(p)\, \delta_k(p)$$

Update the weights at the output neurons:

$$w_{jk}(p+1) = w_{jk}(p) + \Delta w_{jk}(p)$$
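A whole-layer sketch of this output-layer update, assuming y_hidden and y_output come from the forward pass of Step 2 and eta is the learning parameter η; all names and numbers are placeholders.

```python
import numpy as np

eta = 0.1                                  # learning parameter η
y_hidden = np.array([0.52, 0.88, 0.31])    # y_j(p): hidden-layer outputs
y_output = np.array([0.61])                # y_k(p): actual outputs
y_desired = np.array([1.0])                # y_d,k(p): desired outputs
W = np.array([[-1.2], [1.1], [0.4]])       # w_jk(p): hidden-to-output weights

e = y_desired - y_output                       # e_k(p)
delta_out = y_output * (1.0 - y_output) * e    # δ_k(p)
delta_W = eta * np.outer(y_hidden, delta_out)  # Δw_jk(p)
W = W + delta_W                                # w_jk(p+1)
```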
Step 3: Weight training (continued)
(b) Calculate the error gradient for the neurons in the hidden layer:

$$\delta_j(p) = y_j(p)\,\left[1 - y_j(p)\right] \sum_{k=1}^{l} \delta_k(p)\, w_{jk}(p)$$

where $l$ is the number of neurons in the output layer.
Calculate the weight corrections:

$$\Delta w_{ij}(p) = \eta\, x_i(p)\, \delta_j(p)$$

Update the weights at the hidden neurons:

$$w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p)$$
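A matching sketch for the hidden-layer update, continuing the same placeholder names; delta_out is the output-layer gradient computed in part (a).

```python
import numpy as np

eta = 0.1
x = np.array([1.0, 0.0])                    # x_i(p): input pattern
y_hidden = np.array([0.52, 0.88, 0.31])     # y_j(p): hidden-layer outputs
delta_out = np.array([0.093])               # δ_k(p) from part (a)
W = np.array([[-1.2], [1.1], [0.4]])        # w_jk(p): hidden-to-output weights
V = np.array([[0.5, 0.9, -0.3],
              [0.4, 1.0,  0.2]])            # w_ij(p): input-to-hidden weights

delta_hid = y_hidden * (1.0 - y_hidden) * (W @ delta_out)   # δ_j(p)
delta_V = eta * np.outer(x, delta_hid)                      # Δw_ij(p)
V = V + delta_V                                             # w_ij(p+1)
```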
Step 4: Iteration
Increase iteration p by one, go
back to Step 2 and repeat the process until the
selected error criterion is satisfied.
As an example, we may consider the three-layer
back-propagation network. Suppose that the
network is required to perform logical operation
Exclusive-OR. Recall that a single-layer
perceptron could not do this operation. Now we
will apply the three-layer net.
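As a hedged illustration of this XOR example, the sketch below trains a 2-3-1 network with the update rules of Steps 1-4 (no momentum term); thresholds are folded into the weight matrices via a constant bias input of -1, and the seed, learning rate and epoch count are arbitrary choices, not values from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
V = rng.uniform(-2.4 / 3, 2.4 / 3, size=(3, 3))   # Step 1: 2 inputs + bias -> 3 hidden neurons
W = rng.uniform(-2.4 / 4, 2.4 / 4, size=(4, 1))   # Step 1: 3 hidden + bias -> 1 output neuron
eta = 0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)   # Exclusive-OR targets

for epoch in range(5000):                          # Step 4: iterate until the error criterion is met
    for x, y_d in zip(X, Y):
        xb = np.append(x, -1.0)                                  # constant bias input -1
        h = sigmoid(xb @ V)                                      # Step 2(a): hidden outputs
        hb = np.append(h, -1.0)
        y = sigmoid(hb @ W)                                      # Step 2(b): network output
        delta_out = y * (1 - y) * (y_d - y)                      # Step 3(a)
        delta_hid = h * (1 - h) * (W[:-1, :] @ delta_out)        # Step 3(b)
        W += eta * np.outer(hb, delta_out)
        V += eta * np.outer(xb, delta_hid)

for x in X:
    h = sigmoid(np.append(x, -1.0) @ V)
    print(x, sigmoid(np.append(h, -1.0) @ W))   # should approach 0, 1, 1, 0; may need more epochs or another seed
```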
Working Example
Back propagation (mynotes)
APPLICATIONS OF FEEDFORWARD NEURAL
NETWORKS
There is a wide variety of applications of neural networks to real-world problems.
1. Gene Expression Profiling for predicting Clinical Outcomes in cancer patients.
2. Steering an Autonomous Vehicle.
3. Call admission control in ATM Networks.
4. Robot Arm Control and Navigation.