Artificial Neural Networks: Architectures
Perceptron Network
• Weights between the input and output units are adjusted during training.
• Weights between the sensory and associator units are fixed.
• The goal of the perceptron net is to classify the input pattern as a member or not a member of a particular class.
[Figure: single-layer perceptron; inputs X0 = 1 (bias), X1, …, Xi, …, Xn with bias b and weights W1, W2, …, Wn feeding the output unit Y with response y.]
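The adjustable-weight behaviour above can be sketched as a small training loop. This is a minimal illustration, not taken from the slides; the bipolar AND task and the learning rate of 1 are assumptions chosen for the example:

```python
import numpy as np

def perceptron_train(X, t, alpha=1.0, epochs=100):
    """Perceptron rule: update weights only on misclassified samples (targets are +1/-1)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        changed = False
        for x, target in zip(X, t):
            y = 1 if x @ w + b >= 0 else -1   # threshold activation
            if y != target:                    # adjust only when wrong
                w += alpha * target * x
                b += alpha * target
                changed = True
        if not changed:                        # a clean epoch means convergence
            break
    return w, b

# Illustrative task: AND with bipolar targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w, b = perceptron_train(X, t)
preds = np.where(X @ w + b >= 0, 1, -1)
```

The loop stops early once an epoch passes with no misclassification, which for a linearly separable task is guaranteed by the perceptron convergence theorem.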
Adaline Network
• Receives input from several units and from one bias unit.
• Inputs are +1 or −1; weights carry a + or − sign.
• The calculated net input is applied to a quantizer function that restores the output to +1 or −1.
• Compares the actual output with the target and adjusts the weights accordingly (the delta/LMS rule).
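A sketch of the Adaline behaviour described above, assuming the standard delta (LMS) rule: training compares the target with the *linear* net input, and the quantizer is applied only when producing the final ±1 output. The bipolar AND data and learning rate 0.1 are illustrative choices, not from the slides:

```python
import numpy as np

def adaline_train(X, t, alpha=0.1, epochs=50):
    """LMS (delta) rule: minimize squared error on the linear net input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            net = x @ w + b          # linear output (no quantizer during training)
            err = target - net       # compare actual output with target
            w += alpha * err * x
            b += alpha * err
    return w, b

def quantize(net):
    """Quantizer function: restore the output to +1 or -1."""
    return np.where(net >= 0, 1, -1)

# Illustrative task: bipolar AND
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w, b = adaline_train(X, t)
preds = quantize(X @ w + b)
```

Unlike the perceptron, the Adaline keeps adjusting weights even for correctly classified samples, converging toward the least-squares fit of the targets.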
Madaline Network
• Contains n units in the input layer, m units in the Adaline layer, and 1 unit in the Madaline layer.
• Each neuron in the Adaline and Madaline layers has a bias of excitation 1.
• The Adaline layer sits between the input layer and the Madaline output layer.
• Used in communication systems, equalizers, and noise-cancellation devices.
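As a toy illustration of the n-input / m-Adaline / one-Madaline layout, the sketch below hand-sets weights so the network computes bipolar XOR, a function a single Adaline cannot represent. The specific weight values are assumptions for the example, not a trained network:

```python
def quantize(net):
    """Bipolar hard limiter used by every unit."""
    return 1 if net >= 0 else -1

def madaline_xor(x1, x2):
    """Madaline with two hidden Adalines and one output unit, each with bias."""
    z1 = quantize(0.5 * x1 - 0.5 * x2 - 0.5)    # Adaline 1: fires for (+1, -1)
    z2 = quantize(-0.5 * x1 + 0.5 * x2 - 0.5)   # Adaline 2: fires for (-1, +1)
    y = quantize(0.5 * z1 + 0.5 * z2 + 0.5)     # Madaline unit: OR of z1, z2
    return y
```

The output unit fires exactly when either hidden Adaline fires, so the net returns +1 only for inputs that differ.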
Back Propagation Network
• A multilayer feed-forward network consisting of input, hidden, and output layers.
• Hidden and output layers have biases whose activation is 1.
• Signals travel in the reverse direction during the learning phase.
• Inputs sent to the BPN produce outputs that are compared with the targets; the resulting error is propagated back to update the weights.
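The "signals reversed in the learning phase" step can be sketched as one forward pass followed by a backward pass. The layer sizes, sigmoid activation, and squared-error loss below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 3 hidden -> 1 output; hidden/output biases have activation 1
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = np.array([0.5, -1.0])
t = np.array([1.0])

# Forward pass: signals flow input -> hidden -> output
h = sigmoid(x @ W1 + b1)
y = sigmoid(h @ W2 + b2)

# Backward pass: error signals flow output -> hidden (the "reversed" phase)
delta_out = (y - t) * y * (1 - y)              # output-layer error signal
delta_hid = (delta_out @ W2.T) * h * (1 - h)   # error signal pushed back through W2
grad_W2 = np.outer(h, delta_out)               # gradients of 0.5*(y - t)^2
grad_W1 = np.outer(x, delta_hid)
```

Each weight would then be moved opposite its gradient (e.g. `W2 -= lr * grad_W2`) and the two passes repeated until the error is small.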
Auto Associative Memory Network
• The training input and target output vectors are the same.
• The input layer consists of n input units and the output layer of n output units.
• Input and output units are connected through weighted interconnections.
• Input and output vectors are perfectly correlated with each other, component by component.
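A common way to realize "training input = target output" is Hebbian outer-product storage; the sketch below assumes that rule (with zeroed self-connections, a frequent but not universal choice) and shows recall of a stored pattern from a noisy probe:

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product weights: each pattern is both input and target."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # strengthen correlated components
    np.fill_diagonal(W, 0)       # no self-connections
    return W

def recall(W, x):
    """One synchronous pass through the weighted interconnections."""
    return np.where(x @ W >= 0, 1, -1)

p = np.array([1, 1, -1, -1])
W = store(p[None, :])
noisy = np.array([1, -1, -1, -1])   # one component flipped
restored = recall(W, noisy)
```

Because the weights encode component-by-component correlations, the net maps the corrupted vector back onto the stored one.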
Maxnet
• Symmetric weights are present on the weighted interconnections.
• Weights between neurons are inhibitory and fixed.
• A Maxnet with this structure can be used as a subnet to select the node whose net input is the largest.
[Figure: Maxnet with nodes X1, …, Xi, Xj, …, Xm; each node has a self-excitatory weight of 1 and a fixed inhibitory weight −ε to every other node.]
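The winner-selection behaviour can be sketched with the usual Maxnet iteration: each node keeps its own activation (self-weight 1) and is inhibited by −ε times the activations of all other nodes, with a ramp activation clipping negatives to zero. Here ε = 0.15 (< 1/m) and the initial activations are illustrative choices:

```python
import numpy as np

def maxnet(activations, eps=0.15, max_iter=100):
    """Iterate mutual inhibition until at most one node stays active."""
    a = np.array(activations, dtype=float)
    for _ in range(max_iter):
        total = a.sum()
        # a_j <- f(a_j - eps * sum of the OTHER activations), f = ramp
        a = np.maximum(0.0, a - eps * (total - a))
        if (a > 0).sum() <= 1:
            break
    return a

a = maxnet([0.2, 0.4, 0.6, 0.8])
winner = int(np.argmax(a))   # index of the node with the largest initial input
```

Weaker nodes are driven to zero one by one, leaving only the node whose net input was largest, which is why the structure works as a winner-take-all subnet.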
Mexican Hat Net
• Neurons are arranged in linear order such that positive connections exist between Xi and its neighbouring units, and negative connections between Xi and far-away units.
• The positive region is cooperation; the negative region is competition.
• The sizes of these regions depend on the relative magnitudes of the positive and negative weights.
[Figure: Mexican Hat Net centred on unit Xi with neighbours Xi±1, Xi±2, Xi±3; symmetric connection weights W0 (self), W1, W2, W3 at increasing distance.]
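One update step of the cooperation/competition scheme above can be sketched as follows. The weight values and region radii (cooperative within distance 1, competitive out to distance 2) are illustrative assumptions:

```python
import numpy as np

def mexican_hat_step(x, w_pos=0.6, w_neg=0.4, r1=1, r2=2):
    """One synchronous update of a 1-D Mexican-hat net.
    Units within radius r1 of i contribute +w_pos (cooperation);
    units at radius r1+1 .. r2 contribute -w_neg (competition)."""
    n = len(x)
    new = np.zeros(n)
    for i in range(n):
        s = 0.0
        for k in range(-r2, r2 + 1):
            j = i + k
            if 0 <= j < n:
                s += (w_pos if abs(k) <= r1 else -w_neg) * x[j]
        new[i] = max(0.0, s)   # ramp activation
    return new

x = np.array([0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0])
y = mexican_hat_step(x)
peak = int(np.argmax(y))
```

Repeating the step sharpens the activity bump: the cooperative centre reinforces itself while the competitive surround suppresses the flanks.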
Editor's Notes
  • #3: The learning signal is the difference between the desired and actual response of a neuron. For the perceptron learning rule, consider a finite set of N input training vectors x(n) with associated target (desired) values t(n), where n runs from 1 to N and each target is either +1 or −1. The output y is obtained by applying the activation function to the calculated net input.
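The rule described in the note can be written compactly. Assuming a learning rate α (an illustrative symbol, not named in the slides), the output and the update applied on a misclassified sample are:

```latex
y = f\Bigl(\sum_i w_i\, x_i(n) + b\Bigr), \qquad
w_i \leftarrow w_i + \alpha\, t(n)\, x_i(n), \quad
b \leftarrow b + \alpha\, t(n) \quad \text{whenever } y \neq t(n)
```

When y already equals t(n), the weights are left unchanged, which is what distinguishes the perceptron rule from the Adaline's delta rule.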