Perceptron and Neural Networks
Shaik Nagur Shareef
Dept. of CSE
Vignan’s University
[Figure: input signals (external stimuli) flowing through a network to output values]
Contents
Introduction
Neural Networks
Perceptron and Examples
Types of NN
Applications
Human Information Processing System
 The brain is a highly complex, nonlinear, and parallel computer.
 It has the capability to organize its structural constituents, known as neurons, to perform certain computations: pattern recognition, perception, and motor control.
 For such tasks it is many times faster than the fastest digital computer in existence today.
 Neurons are the information-processing units of the human brain.
Interconnected neurons form a neural (nerve) net.
Human Nervous System
 Viewed as a three-stage system.
 Central to the system is the brain, represented by the neural (nerve) net, which
continually receives information, perceives it, and makes appropriate decisions.
 The arrows pointing from left to right indicate the forward transmission of signals.
 The arrows pointing from right to left signify the presence of feedback in the system.
 The receptors convert stimuli from the human body or the external environment into
electrical impulses that convey information to the neural net.
 The effectors convert electrical impulses generated by the neural net into discernible
responses as system outputs.
Machine Information Processing System
 Neural Networks are made of artificial neurons.
 A Neural Network is a machine that is designed to model the way in which the brain
performs a particular task or function of interest.
 The network is usually implemented by using electronic components or is simulated in
software on a digital computer.
 Neural Networks perform useful computations through a process of learning.
Biological Neuron vs Machine Neuron
[Figure: a biological neuron alongside its artificial (machine) counterpart]
Neural Network
 A Neural Network is a massively parallel distributed processor made up of simple
processing units that has a natural propensity for storing experiential knowledge and
making it available for use.
 It resembles the brain in two respects:
1. Knowledge is acquired by the network from its environment through a learning process.
2. Interneuron connection strengths, known as synaptic weights, are used to store the
acquired knowledge.
 The procedure used to perform the learning process is called a learning algorithm, the
function of which is to modify the synaptic weights of the network in an orderly fashion
to attain a desired design objective.
 The modification of synaptic weights provides the traditional method for the design of
neural networks.
Neural Network
[Figure: an example neural network]
Artificial Neuron - [McPi43]
 In 1943, Warren McCulloch and Walter Pitts introduced one of the first artificial
neurons [McPi43].
 The main feature of their neuron model is that a weighted sum of input signals is
compared to a threshold to determine the neuron output.
 When the sum is greater than or equal to the threshold, the output is 1.
 When the sum is less than the threshold, the output is 0.
 Networks of these neurons can, in principle, compute any arithmetic or logical function.
 The parameters of [McPi43] networks had to be designed, as no training method was
available.
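To make this concrete, here is a minimal Python sketch of such a neuron; the weights and thresholds are hand-picked for illustration, precisely because no training method was available:

```python
# A McCulloch-Pitts neuron: a weighted sum of input signals is compared
# to a threshold. Output is 1 when the sum meets the threshold, else 0.
def mp_neuron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Hand-designed parameters (no learning) realizing two logic functions:
def AND(x1, x2):
    return mp_neuron((x1, x2), (1, 1), threshold=2)

def OR(x1, x2):
    return mp_neuron((x1, x2), (1, 1), threshold=1)

for x in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(x, "AND:", AND(*x), "OR:", OR(*x))
```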
Perceptron - [Rose58]
 In the late 1950s, Frank Rosenblatt and several other researchers developed a class of
neural networks called perceptrons.
 The neurons in these networks were similar to those of McCulloch and Pitts.
 Rosenblatt's key contribution was the introduction of a learning rule for training
perceptron networks to solve pattern recognition problems [Rose58].
 He proved that his learning rule will always converge to the correct network weights, if
weights exist that solve the problem.
 Learning was simple and automatic.
 Examples of proper behavior were presented to the network, which learned from its
mistakes.
 The perceptron could even learn when initialized with random values for its weights
and biases.
Learning Rule
 It is a procedure for modifying the weights and biases of a network.
 This procedure may also be referred to as a training algorithm.
 The purpose of the learning rule is to train the network to perform some task.
 There are many types of neural network learning rules.
 They fall into three broad categories:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement (or graded) learning
Supervised Learning
 In supervised learning, the learning rule is provided with a set of examples (the training set) of proper network behavior: {p1, t1}, {p2, t2}, …, {pQ, tQ}.
 Here pq is an input to the network and tq is the corresponding correct (target) output.
 As the inputs are applied to the network, the network outputs are compared to the
targets.
 The learning rule is then used to adjust the weights and biases of the network in order to
move the network outputs closer to the targets.
 The perceptron learning rule falls in this supervised learning category.
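For instance, a training set in the {pq, tq} form above can be represented as a simple list of input/target pairs (the values here are hypothetical):

```python
# Supervised training set: each element pairs an input p_q with its
# correct (target) output t_q. Values are illustrative only.
training_set = [
    ((0.0, 0.0), 0),
    ((0.0, 1.0), 0),
    ((1.0, 0.0), 0),
    ((1.0, 1.0), 1),
]
for p, t in training_set:
    print("input p =", p, "target t =", t)
```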
Unsupervised Learning
 In unsupervised learning, the weights and biases are modified in response to network
inputs only.
 There are no target outputs available.
 At first glance this might seem to be impractical.
 How can you train a network if you don't know what it is supposed to do?
 Most of these algorithms perform some kind of clustering operation.
 They learn to categorize the input patterns into a finite number of classes.
 This is especially useful in such applications as vector quantization.
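A minimal sketch of such a clustering rule, assuming a simple winner-take-all (competitive) update in which only the inputs, never any targets, drive the weight changes:

```python
import numpy as np

# Competitive (unsupervised) learning sketch: prototype vectors drift
# toward the centres of the input clusters, as in vector quantization.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(2, 2))          # two prototype vectors, 2-D inputs
inputs = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.1, size=(20, 2)),   # cluster near (0, 0)
    rng.normal(loc=(3.0, 3.0), scale=0.1, size=(20, 2)),   # cluster near (3, 3)
])
rng.shuffle(inputs)
eta = 0.1
for p in inputs:
    # The closest prototype wins and moves a fraction eta toward the input.
    winner = np.argmin(np.linalg.norm(prototypes - p, axis=1))
    prototypes[winner] += eta * (p - prototypes[winner])
print(prototypes)  # each row typically ends up near one cluster centre
```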
Reinforcement Learning
 Reinforcement learning is similar to supervised learning, except that, instead of being
provided with the correct output for each network input, the algorithm is only given a
grade.
 The grade (or score) is a measure of the network performance over some sequence of
inputs.
 This type of learning is currently much less common than supervised learning.
 It appears to be most suited to control system applications.
Perceptron Architecture
[Figure: the perceptron architecture, with input vector p, weight matrix W, bias b, and hardlim transfer function]
Procedure
 Consider the network weight matrix W: row i of W holds the weight vector wi of neuron i.
 Neuron i computes the net input ni = wi·p + bi and the output ai = hardlim(ni).
 In matrix form, the whole layer computes a = hardlim(Wp + b).
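A small sketch of this matrix form, with illustrative values for W, b, and p:

```python
import numpy as np

# Multi-neuron perceptron in matrix form: each row of W is one neuron's
# weight vector, so a = hardlim(W p + b) computes all outputs at once.
def hardlim(n):
    return (n >= 0).astype(int)

W = np.array([[1.0, 1.0],     # row i = weight vector of neuron i
              [1.0, -1.0]])
b = np.array([-1.0, 0.0])
p = np.array([1.0, 0.5])
a = hardlim(W @ p + b)
print(a)                      # outputs of both neurons, here [1 1]
```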
Activation Function (hardlim Function)
[Figure: the step (hardlim) function and the sigmoid function]
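Both functions shown above can be written in a few lines (a sketch using NumPy):

```python
import numpy as np

# hardlim is the step function used by the perceptron; the sigmoid is
# its smooth, differentiable counterpart.
def hardlim(n):
    return np.where(n >= 0, 1, 0)

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

n = np.linspace(-3, 3, 7)
print(hardlim(n))   # [0 0 0 1 1 1 1]
print(sigmoid(0))   # 0.5
```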
Single-Neuron Perceptron - Example
 Consider a two-input, single-neuron perceptron: a = hardlim(w1,1 p1 + w1,2 p2 + b).
 Assign values to the weights and bias: w1,1 = 1, w1,2 = 1, b = -1.
 The decision boundary is n = w1,1 p1 + w1,2 p2 + b = p1 + p2 - 1 = 0.
Single-Neuron Perceptron - Example
 On the boundary p1 + p2 = 1 the net input n is zero; above and to the right of the line, n > 0.
 Therefore, the network output will be 1 for the region above and to the right of the decision boundary.
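A quick numerical check of this example, assuming the values assigned above (w = [1, 1], b = -1):

```python
import numpy as np

# Evaluate the single-neuron perceptron on points around the boundary
# p1 + p2 = 1: output 1 above/right, 0 below/left.
def hardlim(n):
    return 1 if n >= 0 else 0

w, b = np.array([1.0, 1.0]), -1.0
for p in [np.array([2.0, 2.0]),     # above/right of the boundary -> 1
          np.array([0.0, 0.0]),     # below/left of the boundary  -> 0
          np.array([0.5, 0.5])]:    # on the boundary (n = 0)     -> 1
    print(p, hardlim(w @ p + b))
```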
Single-Neuron Perceptron - Example
[Figure: the decision boundary in the input plane, with the output-1 region shaded]
Logic Function: AND gate
 The AND gate outputs 1 only when both inputs are 1: {p1=[0,0], t1=0}, {p2=[0,1], t2=0}, {p3=[1,0], t3=0}, {p4=[1,1], t4=1}.
 One choice of weights and bias that solves the problem is w = [1, 1] and b = -1.5, giving the decision boundary p1 + p2 = 1.5.
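A quick verification of this choice (a sketch; w = [1, 1], b = -1.5 are the assumed solution from above):

```python
# Check the assumed AND-gate solution over the full truth table.
def hardlim(n):
    return 1 if n >= 0 else 0

w, b = (1.0, 1.0), -1.5
for p, t in [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]:
    a = hardlim(w[0] * p[0] + w[1] * p[1] + b)
    print(p, "target", t, "output", a)   # output matches target on all rows
```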
Weight Update Rule
 For a training pair (p, t), compute the output a = hardlim(Wp + b) and the error e = t - a.
 The perceptron learning rule then updates the parameters as:
w(new) = w(old) + e·p
b(new) = b(old) + e
 If e = 0 the pattern is already classified correctly and nothing changes; otherwise the weights move toward (e = 1) or away from (e = -1) the input p.
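A minimal sketch of this rule in action, learning the AND gate from a random initialization (the training setup is illustrative; the rule itself is as stated above):

```python
import numpy as np

# Perceptron learning rule: e = t - a, w += e*p, b += e.
# AND is linearly separable, so the rule converges (Rosenblatt).
def hardlim(n):
    return 1 if n >= 0 else 0

rng = np.random.default_rng(1)
w, b = rng.normal(size=2), rng.normal()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for epoch in range(50):
    errors = 0
    for p, t in data:
        p = np.array(p, dtype=float)
        e = t - hardlim(w @ p + b)    # error = target - output
        w, b = w + e * p, b + e       # perceptron update rule
        errors += abs(e)
    if errors == 0:                   # a full epoch with no mistakes
        break
print("converged after", epoch + 1, "epochs; w =", w, "b =", b)
```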
Training Algorithms
 Adjust neural network weights to map inputs to outputs.
 Use a set of sample patterns where the desired output (given the inputs presented) is
known.
 The purpose is to learn to generalize: recognize features which are common to good and bad exemplars.
Training: Key Terms
 Epoch: Presentation of the entire training set to the neural network.
 In the case of the AND function an epoch consists of four sets of inputs being presented
to the network (i.e. [0,0], [0,1], [1,0], [1,1])
 Error: The error value is the amount by which the value output by the network differs
from the target value.
 For example, if we required the network to output 0 and it output a 1, then Error = -1
 Online training: Update weights after each sample
 Offline (batch training): Compute error over all samples
Then update weights
 Training: backpropagation procedure
Gradient descent strategy (with its usual problems, e.g. local minima)
 Prediction: Compute outputs based on input vector & weights
Gradient Descent Concept
 Error: sum-of-squares error over the training inputs, given the current weights
 Compute rate of change of error w.r.t each weight
Which weights have greatest effect on error?
Effectively, partial derivatives of error w.r.t weights
In turn, depend on other weights => chain rule
 E = G(w)
Error as function of weights
 Find rate of change of error
Follow steepest rate of change
Change weights so that error is minimized
Gradient Descent Algorithm
Gradient-Descent(training_examples, η)
Each training example is a pair of the form <(x1,…,xn), t>, where (x1,…,xn) is the vector of input values, t is the target output value, and η is the learning rate (e.g. 0.1)
 Initialize each wi to some small random value
 Until the termination condition is met, Do
    Initialize each Δwi to zero
    For each <(x1,…,xn), t> in training_examples Do
        Input the instance (x1,…,xn) to the linear unit and compute the output o
        For each linear unit weight wi Do
            Δwi = Δwi + η (t - o) xi
    For each linear unit weight wi Do
        wi = wi + Δwi
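A sketch of this pseudocode for a linear unit o = w·x, with a hypothetical linear target so that convergence is easy to see:

```python
import numpy as np

# Batch gradient descent for a linear unit: the update Δw accumulates
# η (t - o) x over all examples before the weights change (offline
# training), exactly as in the pseudocode above.
eta = 0.1
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
t = X @ np.array([2.0, -1.0])            # hypothetical targets from w* = [2, -1]
w = np.random.default_rng(0).normal(scale=0.01, size=2)  # small random init
for _ in range(200):                     # stand-in termination condition
    dw = np.zeros_like(w)                # initialize each Δw_i to zero
    for x, target in zip(X, t):
        o = w @ x                        # linear unit output
        dw += eta * (target - o) * x     # accumulate η (t - o) x_i
    w = w + dw                           # apply the accumulated update
print(w)                                 # approaches [2, -1]
```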
Gradient Descent Algorithm
[Figure: error surface with one descent step from (w1, w2) to (w1 + Δw1, w2 + Δw2)]
Back-Propagation
 A training procedure which allows multi-layer feedforward Neural Networks to be
trained;
 Can theoretically perform “any” input-output mapping;
 Can learn to solve linearly inseparable problems.
 For feed-forward networks:
A continuous function can be differentiated allowing gradient-descent.
Back-propagation is an example of a gradient-descent technique.
[Figure: units i → j → k in successive layers, with weights wji and wkj and unit outputs yi and yj]
Back-Propagation Algorithm
 Initialize each wi,j to some small random value
 Until the termination condition is met, Do
    For each training example <(x1,…,xn), t> Do
        Input the instance (x1,…,xn) to the network and compute the network outputs ok
        For each output unit k
            δk = ok (1 - ok) (tk - ok)
        For each hidden unit h
            δh = oh (1 - oh) Σk wh,k δk
        For each network weight wi,j Do
            wi,j = wi,j + Δwi,j, where Δwi,j = η δj xi,j
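A sketch of this algorithm for a single hidden layer of sigmoid units, applied to XOR (a linearly inseparable problem, as noted earlier); the network size, learning rate, and data are illustrative, and some seeds may need more epochs to converge:

```python
import numpy as np

# Back-propagation for a 2-3-1 sigmoid network:
#   output deltas  δk = ok(1-ok)(tk-ok)
#   hidden deltas  δh = oh(1-oh) Σk wh,k δk
#   weight update  w += η δj x_ij
def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 3)), np.zeros(3)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(3, 1)), np.zeros(1)  # hidden -> output
eta = 0.5
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
for _ in range(5000):
    for x, t in data:
        x = np.array(x, dtype=float)
        oh = sigmoid(x @ W1 + b1)         # hidden outputs
        ok = sigmoid(oh @ W2 + b2)        # network outputs
        dk = ok * (1 - ok) * (t - ok)     # output deltas
        dh = oh * (1 - oh) * (W2 @ dk)    # hidden deltas via the chain rule
        W2 += eta * np.outer(oh, dk); b2 += eta * dk
        W1 += eta * np.outer(x, dh);  b1 += eta * dh
for x, t in data:
    x = np.array(x, dtype=float)
    out = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
    print(x, "target", t, "output", round(float(out[0]), 3))  # approaches targets
```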
Types of Neural Networks
[Figure: Standard NN, Convolutional NN, and Recurrent NN architectures]
Benefits of Neural Networks
 Nonlinearity: An artificial neuron can be linear or nonlinear.
 Input–Output Mapping: A popular paradigm of learning, called learning with a
teacher, or supervised learning, involves modification of the synaptic weights of a
neural network by applying a set of labelled training examples, or task examples.
 Adaptivity: Neural networks have a built-in capability to adapt their synaptic weights
to changes in the surrounding environment.
 Evidential Response: In the context of pattern classification, a neural network can be
designed to provide information not only about which particular pattern to select, but
also about the confidence in the decision made.
 Fault Tolerance: A neural network, implemented in hardware form, has the potential
to be inherently fault tolerant, or capable of robust computation, in the sense that its
performance degrades gracefully under adverse operating conditions.
Applications
Natural language Processing
Optical Character Recognition
Speech recognition
Neural Machine Translation
Video Classification
Emotion Recognition
Face Recognition
Object Detection
Image Classification
Thank You
Any Questions..?