Artificial Neural Network
Introduction
• Motivated by the possibility of creating an artificial
computing network similar to the brain and nerve cells in
our body.
• These networks are computing systems composed of a
number of highly interconnected layers of simple neuron-
like processing elements.
• The entire network collectively performs computations,
with the knowledge represented as distributed patterns
of activity all over processing elements.
• The collective activities result in a high degree of
parallelism, which enables the network to solve complex
problems.
• The distributed representation leads to greater fault
tolerance and to graceful degradation when problems occur.
• They have the capability of simulating non-linear
patterns.
• Their advantage lies in the fact that they demand less
time for development than traditional mathematical
models.
ANN Architecture
There are three layers:-
1. Input Layer:- The first layer of an ANN that receives the
input information in the form of various texts, numbers,
audio files, image pixels, etc.
2. Hidden Layer:- In the middle of the ANN model are
the hidden layers. There can be a single hidden layer or
multiple hidden layers. These hidden layers perform
various types of mathematical computation on the input
data and recognize the patterns present in it.
3. Output Layer:- In the output layer, we obtain the result
of the rigorous computations performed by the middle
layers.
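The three layers above can be sketched as a single forward pass; the layer sizes and the sigmoid activation are illustrative assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 5, 3          # input, hidden, output layer sizes
W1 = rng.normal(size=(n_hidden, n_in))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))  # hidden -> output weights
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=n_in)                # one input pattern (input layer)
h = sigmoid(W1 @ x + b1)                 # hidden-layer computation
y = sigmoid(W2 @ h + b2)                 # output-layer result
```

Each hidden layer is just another weight matrix and activation applied to the previous layer's output, so deeper networks repeat the middle step.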
Types of ANN
1. Feedforward ANN:-
• The flow of information takes place only in one direction.
• no feedback loops
• mostly used in supervised learning tasks such as
classification, image recognition, etc.
• used in cases where the data is not sequential in nature.
2. Feedback ANN:-
• Feedback loops are a part of the network.
• Such neural networks are mainly used for memory
retention, as in the case of recurrent neural
networks.
• These types of networks are best suited for areas where
the data is sequential or time-dependent.
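The difference between the two types comes down to one line of code: in a feedback network, the previous state is fed back into the next update. A minimal sketch of one recurrent state update (sizes and tanh activation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_state = 3, 4
W_in = rng.normal(size=(n_state, n_in))      # input -> state weights
W_rec = rng.normal(size=(n_state, n_state))  # state -> state weights (the feedback loop)

state = np.zeros(n_state)
sequence = rng.normal(size=(5, n_in))        # five time steps of sequential input
for x_t in sequence:
    # the previous state feeds back into the new one; this loop is what
    # distinguishes a feedback (recurrent) net from a feedforward one
    state = np.tanh(W_in @ x_t + W_rec @ state)
```

Dropping the `W_rec @ state` term recovers a plain feedforward computation with no memory of earlier inputs.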
Back-Propagation
• Back-propagation is used to train the neural network
using the chain rule method.
• After each forward pass through the network, this
algorithm performs a backward pass to adjust the
model's parameters, the weights and biases.
• It is a process in which the internal parameters of the
network, the weighting factors W and the biases B, are
adjusted.
• The bias is an adjusting parameter, which reduces the
error in the system. Values of these parameters are
calculated using multiple-variable optimization
algorithms.
• The change that has to be made to the weighting factors
and biases is calculated using the derivative vector D and
the input data v to that layer according to the following
rule:
W_new = W_old + lr · D · vᵀ
B_new = B_old + lr · D
where lr is the learning rate.
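The update rule above can be written directly in NumPy; the shapes and the all-zero initial values below are illustrative assumptions chosen so the result is easy to check by hand.

```python
import numpy as np

lr = 0.1                       # learning rate
n_out, n_in = 3, 4
W_old = np.zeros((n_out, n_in))
B_old = np.zeros(n_out)
D = np.ones(n_out)             # derivative vector for this layer
v = np.ones(n_in)              # input data presented to this layer

W_new = W_old + lr * np.outer(D, v)   # W_new = W_old + lr * D * v^T
B_new = B_old + lr * D                # B_new = B_old + lr * D
```

`np.outer(D, v)` forms the rank-one matrix D·vᵀ, so every weight moves in proportion to both its layer input and the error derivative flowing back to it.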
Back-Propagation Pseudo-code
• Initialize the weights and offsets.
• Set all of them to low random values. Present inputs and
desired outputs. This is done by presenting a continuous-
valued input vector and specifying the desired outputs. If the
network is used as a classifier, all desired outputs are set to 0,
except the one corresponding to the class of the input, which is
set to 1. The input could be new on each trial, or one could
present a training set cyclically.
• Calculate the actual outputs using the sigmoidal non-linearity.
• Adapt weights using a recursive algorithm starting at the
output nodes and working back.
• Adjust the weights using the formula
w_ij(t + 1) = w_ij(t) + η δ_j x'_i
where w_ij is the weight from node i to node j at time t, η is the
gain term, x'_i is the output of node i, and δ_j is the error term
for node j. If node j is an output node, then
δ_j = y_j (1 - y_j)(d_j - y_j)
where d_j is the desired output of node j and y_j is the actual
output. If node j is an internal hidden node, then
δ_j = x'_j (1 - x'_j) Σ_k δ_k w_jk
where k ranges over all nodes in the layers above
node j. If a momentum term α is added, the network
sometimes converges faster and the weight changes are
smoothed:
w_ij(t + 1) = w_ij(t) + η δ_j x'_i + α (w_ij(t) - w_ij(t - 1))
• Repeat Step 2
• Stop
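The pseudo-code above can be sketched as a small runnable training loop. The two-layer network, the XOR task, the layer sizes, and the gain term below are illustrative assumptions; the sigmoid non-linearity and the delta formulas are the ones given in the steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny training set: XOR inputs and desired outputs (an assumed example task)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)             # hidden-node outputs x'_j
    return h, sigmoid(W2 @ h + b2)       # output-node value y

# Step 1: initialize weights and offsets to low random values
W1 = rng.normal(scale=0.5, size=(4, 2)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=4);      b2 = 0.0
eta = 1.0                                # gain term

mse_before = np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(X, d)])

for _ in range(5000):                    # repeat from Step 2
    for x, t in zip(X, d):
        h, y = forward(x)                            # present input, forward pass
        delta_out = y * (1 - y) * (t - y)            # output-node error term
        delta_hid = h * (1 - h) * delta_out * W2     # hidden-node error terms
        # adjust weights working back from the output nodes
        W2 += eta * delta_out * h;  b2 += eta * delta_out
        W1 += eta * np.outer(delta_hid, x); b1 += eta * delta_hid

mse_after = np.mean([(forward(x)[1] - t) ** 2 for x, t in zip(X, d)])
```

The error terms are computed at the output first and then propagated back through `W2` to the hidden nodes, which is the recursive output-to-input order the steps describe.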
Network Training
1. Supervised learning :-
• An input stimulus is applied to the network, which results
in an output response.
• This is compared with the desired target response and
an error signal is generated.
• The learning in back-propagation networks is
supervised.
2. Unsupervised learning: -
• During training, the network receives different input
excitations and arbitrarily organizes the patterns into
categories.
• When a stimulus is later applied, the network indicates
the class to which it belongs; if it fits none of the existing
classes, an entirely new class of stimuli is generated.
3. Reinforced learning :-
• In this case, the network only indicates whether the
output matches the target or not, a pass or fail indication.
In other words, the generated signal is binary. This kind
of learning is used in applications such as fault
diagnosis.
Modes of Training
• Pattern mode :- Consider a training set having N
patterns. The first pattern is presented to the network,
and the whole sequence of forward and backward
computations is performed, resulting in weight
adjustment. Then the second pattern is presented and
weights updated and so on until the Nth pattern.
• Batch mode:- One complete presentation of the entire
training set is called an epoch. Here, weight updating is
done only after the presentation of one full epoch.
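The two modes can be contrasted on a single linear neuron with squared error; the tiny three-pattern dataset and learning rate are illustrative assumptions.

```python
import numpy as np

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # N = 3 patterns
d = np.array([1.0, 2.0, 3.0])                       # desired outputs
lr = 0.1

# Pattern mode: forward/backward pass and weight update after every pattern
w_pat = np.zeros(2)
for x, t in zip(X, d):
    err = t - w_pat @ x
    w_pat += lr * err * x          # weights change N times per epoch

# Batch mode: accumulate the adjustments over one full epoch, then update once
w_bat = np.zeros(2)
grad = np.zeros(2)
for x, t in zip(X, d):
    grad += (t - w_bat @ x) * x
w_bat += lr * grad                 # one update per epoch
```

After one epoch the two weight vectors already differ, because in pattern mode each error is computed with weights that later patterns have already moved.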
