Hopfield Neural Network
Presentation by: Paria Pirkandy – Zahra
Mojtahedin
Electrical Engineering Department
Sharif University of Technology
Table of contents
01  Introduction
02  What is Hopfield Network
03  Some interesting facts
04  Major Applications
05  Mathematical model of HN's
06  Learning HNs through examples
01
Introduction
Memory in Humans
• The human brain can lay down and recall memories in both long-term and short-term fashions
• Memory is associative, or content-addressable
• Memory is not isolated: all memories are, in some sense, strings of memories
• We access a memory by its content, not by its location or the neural pathways involved
• Compare this to traditional computer memory, which is accessed by address
• Given incomplete, low-resolution, or partial information, the brain can reconstruct the full memory
Associative Network Types
• Auto Associative: X = Y (recognize noisy versions of a pattern)
• Hetero Associative: X ≠ Y (iterative correction of input and output)
Associative Network Types
Example: learning the alphabet in class
• Hetero Associative: X ≠ Y (pattern association)
• Auto Associative: X = Y
Hetero Associative Memory:
• The output vector dimension is usually smaller than the input vector dimension (pattern association).
• The Hebb rule is used as a learning algorithm: the weight matrix is calculated by summing the outer products of each input-output pair (a sketch follows below).
After coding the patterns into the weights of this network:
• Is this network capable of perfectly recalling the stored patterns?
• Is there any capacity limitation for perfect recall?
• Is this network capable of reconstructing impaired (corrupted) patterns?
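In symbols, one common form of this rule (assuming input patterns $x^{\mu}$ with target outputs $y^{\mu}$, $\mu = 1, \dots, M$) is:

$$W = \sum_{\mu=1}^{M} y^{\mu}\,(x^{\mu})^{\mathsf{T}}, \qquad \hat{y} = \operatorname{sgn}(W x),$$

so recall simply applies the stored weight matrix to a (possibly noisy) input $x$.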
Auto Associative Memory:
• The input and output vectors are the same.
• The Hebb rule is used as a learning algorithm: the weight matrix is calculated by summing the outer products of each input-output pair.
After coding the patterns into the weights of this network:
• Is this network capable of perfectly recalling the stored patterns?
• Is there any capacity limitation for perfect recall?
• Is this network capable of reconstructing impaired (corrupted) patterns?
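For the auto-associative case the same rule reads (a sketch; the $1/N$ scaling and the zero diagonal are common conventions, not something stated on the slide):

$$W = \frac{1}{N}\sum_{\mu=1}^{M} \xi^{\mu}\,(\xi^{\mu})^{\mathsf{T}}, \qquad w_{ii} = 0,$$

and "perfect recall" of a stored pattern $\xi^{\nu}$ means $\operatorname{sgn}(W \xi^{\nu}) = \xi^{\nu}$; the capacity question is how many patterns can satisfy this at once.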
02
What is Hopfield Network
• John J. Hopfield is renowned for his
pioneering work in theoretical
physics, particularly in introducing
the concept of "Hopfield networks,"
which are foundational to modern
neural networks and artificial
intelligence. His research bridges
physics, biology, and computational
science, offering profound insights
into how complex systems process
information.
What is Hopfield Network
• A Hopfield net is a form of recurrent artificial neural network invented by John Hopfield. Hopfield nets serve as content-addressable memory systems with binary threshold units. They are guaranteed to converge to a local minimum of an energy function, but convergence to one of the stored patterns is not guaranteed.
[Figure: a single binary threshold unit with inputs x1, …, xd, weights w1, …, wd, bias −b, summation S, and output y]
What are HNs (informally)?
• These are single-layered recurrent networks.
• Every neuron in the network receives feedback from all the other neurons.
• The states of a neuron are +1 or −1 (instead of 1 and 0) in order for the model to work correctly.
• The number of input nodes always equals the number of output nodes.
• As an example, picture a Hopfield network with four nodes, each connected to every other node.
• Able to store certain patterns in a similar fashion to the human brain
• Given partial information, the full pattern can be recovered
• Robustness
• Guarantee of convergence
  o During an average lifetime many neurons will die, but we do not suffer a catastrophic loss of individual memories (by the time we die we may have lost 20 percent of our original neurons).
  o We are guaranteed that the pattern will settle down, after a long enough time, to some fixed pattern.
  o In the language of memory recall, if we start the network off with a pattern of firing which approximates one of the "stable firing patterns" (memories), it will, under its own steam, end up in the nearby well in the energy surface, thereby recalling the original perfect memory.
Images are from http://www2.psy.uq.edu.au/~brainwav/Manual/Hopfield.html
How Does It Work?
A set of exemplar patterns is chosen and used to initialize the weights of the network.
Once this is done, any pattern can be presented to the network, which will respond by displaying the exemplar pattern that is, in some sense, similar to the input pattern.
The output pattern is read off from the network by reading the states of the units in the order determined by the mapping of the components of the input vector to the units.
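A minimal Python sketch of this procedure; the toy patterns, network size, and update schedule are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def train(patterns):
    """Hebbian (outer-product) storage of +/-1 exemplar patterns."""
    P = np.asarray(patterns, dtype=float)     # shape (M, N)
    W = P.T @ P / P.shape[1]                  # sum of outer products, scaled by 1/N
    np.fill_diagonal(W, 0.0)                  # no self-connections
    return W

def recall(W, x, max_sweeps=100, seed=0):
    """Asynchronous sign updates until the state stops changing."""
    rng = np.random.default_rng(seed)
    s = np.array(x, dtype=int)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):     # random update order
            new = 1 if W[i] @ s >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                       # fixed point reached
            break
    return s

# Toy usage: store two 8-unit patterns, then recover one from a corrupted copy.
p1 = np.array([ 1, -1,  1, -1,  1, -1,  1, -1])
p2 = np.array([ 1,  1,  1,  1, -1, -1, -1, -1])
W = train([p1, p2])
noisy = p1.copy(); noisy[0] *= -1             # flip one bit
print(recall(W, noisy))                       # expected to match p1
```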
Hopfield Neural Network
Four Components
How to train the network?
How to update a node?
What sequence should be used when updating the nodes?
How to stop?
Network Initialization
• The network has N units (nodes)
• The weight from node i to node j is w_ij
• w_ij = w_ji
• Each node has a threshold / bias value associated with it, b_i
• We have M known patterns p^i = (p^i_1, …, p^i_N), i = 1…M, each of which has N elements
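A standard Hebbian prescription consistent with these properties (the $1/N$ scaling is a common convention) is:

$$w_{ij} = \frac{1}{N}\sum_{\mu=1}^{M} p_i^{\mu}\, p_j^{\mu} \quad (i \neq j), \qquad w_{ii} = 0,$$

which is automatically symmetric, $w_{ij} = w_{ji}$.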
Classification
• Suppose we have an input pattern (p_1, …, p_N) to be classified
• Suppose the state of the i-th node at time t is m_i(t)
• Then:
  o m_i(0) = p_i
  o Testing (S is the sigmoid function)
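A common form of the test/update step, written with the sign activation that the rest of the deck uses (the sigmoid mentioned above is its smooth counterpart), is:

$$m_i(t+1) = \operatorname{sgn}\Big(\sum_j w_{ij}\, m_j(t) - b_i\Big),$$

applied asynchronously (one node at a time) until no state changes.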
Why Converge?
Billiard table model
Surface of billiard table -> energy surface
Energy of the network
The choice of the network weights ensures that minima of the energy function occur at
(or near) points representing exemplar patterns
The energy function
The global energy is the sum of many contributions. Each contribution depends on one connection weight and the binary states of two neurons.
This simple quadratic energy function makes it possible for each unit to compute locally how its state affects the global energy.
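Written out (one standard convention; the sign of the threshold/bias term varies between texts), the global energy and the local quantity each unit needs are:

$$E = -\frac{1}{2}\sum_i \sum_j w_{ij}\, s_i s_j + \sum_i b_i s_i,
\qquad
\Delta E_i = E(s_i = -1) - E(s_i = +1) = 2\Big(\sum_j w_{ij}\, s_j - b_i\Big),$$

so each unit can decide its own state from its weighted input alone: choosing $s_i = \operatorname{sgn}\big(\sum_j w_{ij} s_j - b_i\big)$ never increases $E$.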
Hopfield Neural Network
Capacity of Network
• Let us examine the stability of a particular pattern $\xi^{\nu}$. The stability condition generalizes to:

$$\operatorname{sgn}(h_i^{\nu}) = \xi_i^{\nu} \qquad \text{for all } i$$

• where the net input $h_i^{\nu}$ to unit $i$ in pattern $\nu$ is:

$$h_i^{\nu} = \sum_j w_{ij}\,\xi_j^{\nu} = \frac{1}{N}\sum_j \sum_{\mu} \xi_i^{\mu}\,\xi_j^{\mu}\,\xi_j^{\nu}$$

• Now we separate the sum on $\mu$ into the special term $\mu = \nu$ and all the rest:

$$h_i^{\nu} = \xi_i^{\nu} + \frac{1}{N}\sum_j \sum_{\mu \neq \nu} \xi_i^{\mu}\,\xi_j^{\mu}\,\xi_j^{\nu}$$
• If the second term were zero, we could immediately conclude that pattern $\nu$ was stable according to the previous stability condition. This is still true if the second term is small enough: if its magnitude is smaller than 1, it cannot change the sign of $h_i^{\nu}$ and the stability condition will still be satisfied.
• The second term is called crosstalk. It turns out that it is less than 1 in many cases of interest if p is small enough.
• Consider the quantity:

$$C_i^{\nu} = -\xi_i^{\nu}\,\frac{1}{N}\sum_j \sum_{\mu \neq \nu} \xi_i^{\mu}\,\xi_j^{\mu}\,\xi_j^{\nu}$$

If $C_i^{\nu}$ is negative, the crosstalk term has the same sign as the desired $\xi_i^{\nu}$ and does no harm. But if it is positive and larger than 1, it changes the sign of $h_i^{\nu}$ and makes bit i of pattern $\nu$ unstable.
• $C_i^{\nu}$ depends on the patterns we try to store. For random patterns, with equal probability for the values +1 and −1, we can estimate the probability $P_{\text{error}}$ that any chosen bit is unstable:

$$P_{\text{error}} = \operatorname{Prob}(C_i^{\nu} > 1)$$
• Clearly $P_{\text{error}}$ increases as we increase the number p of patterns. Choosing a criterion for acceptable performance (e.g. $P_{\text{error}} < 0.01$), we can try to determine the storage capacity of the network: the maximum number of patterns that can be stored without unacceptable errors.
• To calculate $P_{\text{error}}$ we observe that $C_i^{\nu}$ behaves like a binomial distribution with zero mean and variance $\sigma^2 = p/N$, where p and N are assumed much larger than 1. For large values of $Np$ we can approximate this distribution with a Gaussian distribution of the same mean and variance:

$$P_{\text{error}} = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{1}^{\infty} e^{-x^2/2\sigma^2}\,dx
= \frac{1}{2}\left[1 - \operatorname{erf}\!\left(\frac{1}{\sqrt{2\sigma^2}}\right)\right]
= \frac{1}{2}\left[1 - \operatorname{erf}\!\left(\sqrt{\frac{N}{2p}}\right)\right]$$

where the error function $\operatorname{erf}(x)$ is defined by:

$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du$$
• This table shows the values of p/N required to obtain various values of $P_{\text{error}}$:

P_error    p_max / N
0.001      0.105
0.0036     0.138
0.01       0.185
0.05       0.37
0.1        0.61

• This calculation tells us only about the initial stability of the patterns. If we choose $p < 0.185N$, it tells us that no more than 1% of the pattern bits will be unstable initially. But if we start the system in a particular pattern $\xi^{\nu}$ and about 1% of the bits flip, what happens next? It may be that the first few flips will cause more bits to flip; in the worst case we will have an avalanche phenomenon. So our estimates of $p_{\max}$ are really upper bounds. We may need smaller values of p to keep the final attractors close to the desired patterns.
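These numbers follow from the Gaussian approximation above; a short sketch that reproduces them (assuming SciPy is available for the error function):

```python
from math import sqrt
from scipy.special import erf

def p_error(p_over_n: float) -> float:
    """P_error = 0.5 * (1 - erf(sqrt(N / (2*p)))), with sigma^2 = p/N."""
    return 0.5 * (1.0 - erf(sqrt(1.0 / (2.0 * p_over_n))))

for ratio in (0.105, 0.138, 0.185, 0.37, 0.61):
    print(f"p/N = {ratio:<6} ->  P_error ~ {p_error(ratio):.4f}")
# Prints values close to 0.001, 0.0036, 0.01, 0.05 and 0.1 respectively.
```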
• The capacity of the Hopfield network model is determined by the number of neurons and the connections within a given network. Therefore, the number of memories that can be stored depends on the numbers of neurons and connections.
• Furthermore, it was shown that the ratio of reliably recallable vectors to nodes is about 0.138 (approximately 138 vectors can be recalled from storage for every 1000 nodes).
• Therefore, it is evident that many mistakes will occur if one tries to store a large number of vectors. When the Hopfield model does not recall the right pattern, it is possible that an intrusion has taken place: semantically related items tend to be confused, and the wrong pattern is recollected.
• Thus, the Hopfield network model can confuse one stored item with another upon retrieval.
Some interesting
facts
03
Some Interesting Facts
● The recall pattern of the Hopfield network is similar to our recall mechanism.
● Both are based on content-addressable memory.
● If some of the neurons of the network are destroyed, the performance is degraded,
but some network capabilities may be retained even with major network damage.
Just like our brains.
Did you know
that we are
similar?
04
Major
Applications
Major Applications
• Recalling or reconstructing corrupted patterns.
• Large-scale computational intelligence systems.
• Handwriting recognition software.
• Practical applications of HNs are limited because the number of training patterns can be at most about 14% of the number of nodes in the network.
• If the network is overloaded (trained with more than the maximum acceptable number of attractors), then it won't converge to clearly defined attractors.
Hopfield Network in Optimization
The idea of using the Hopfield network in optimization problems is straightforward: if a constrained or unconstrained cost function can be written in the form of the Hopfield energy function E, then there exists a Hopfield network whose equilibrium points represent solutions to that optimization problem. Minimizing the Hopfield energy function then both minimizes the objective function and satisfies the constraints, since the constraints are "embedded" into the synaptic weights of the network. Although encoding the optimization constraints into the synaptic weights in the best possible way is a challenging task, many difficult constrained optimization problems in different disciplines have been converted to the Hopfield energy function.
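As a sketch of the idea, a quadratic cost can be matched term by term to the energy function, so a network built from those coefficients settles at local minima of the cost (the $q_{ij}$ and $c_i$ below are generic coefficients, not a specific problem from the slides):

$$C(s) = -\frac{1}{2}\sum_{i,j} q_{ij}\, s_i s_j + \sum_i c_i s_i \;\equiv\; E(s)
\quad\text{with}\quad w_{ij} = q_{ij},\; b_i = c_i.$$

Encoding hard constraints (as in TSP-style problems) adds penalty terms of the same quadratic form to C before reading off the weights.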
Mathematical
model of HN’s
05
Mathematical Modeling of HN’s
• Activation Function: Consider the signum function as the neuron's activation function, i.e.:
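A standard way to write it, with net input $h_i = \sum_j w_{ij}\, s_j - b_i$ as in the earlier slides:

$$s_i \;\leftarrow\; \operatorname{sgn}(h_i) = \begin{cases} +1, & h_i \ge 0,\\ -1, & h_i < 0. \end{cases}$$

(The convention for $h_i = 0$ varies; taking +1, or leaving the state unchanged, are both common.)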
Mathematical Modeling of HN’s
Liapunov Energy Function:
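A standard way to write this function (the same quadratic energy used earlier, up to sign conventions) is:

$$E(s) = -\frac{1}{2}\sum_i \sum_j w_{ij}\, s_i s_j + \sum_i b_i s_i.$$

With symmetric weights and zero self-connections, each asynchronous sign update can only decrease (or leave unchanged) this quantity, which is why the network is guaranteed to settle into a fixed point.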
Power of Hopfield Networks
We want to understand how to achieve this kind of performance from simple Hopfield networks.
06
Learning HNs
through examples
Learning HNs through simple example
There are various ways to train these kinds of networks, such as the backpropagation algorithm, recurrent learning algorithms, and genetic algorithms, but there is one very simple algorithm for training these simple networks, called the "one-shot method."
We will be using this algorithm to train the network.
Learning HNs through simple example
Let's train this network for the following patterns (a code sketch of the procedure follows below):
Pattern 1:
Pattern 2:
Pattern 3:
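The three training patterns above were shown as images in the original slides; the sketch below applies the same one-shot (single-pass Hebbian) storage to three small hypothetical ±1 patterns so the procedure is concrete:

```python
import numpy as np

# Hypothetical stand-ins for Pattern 1-3 (the slide's actual patterns are images).
patterns = np.array([
    [ 1,  1, -1, -1,  1, -1],
    [-1,  1,  1, -1, -1,  1],
    [ 1, -1,  1,  1, -1, -1],
], dtype=float)

N = patterns.shape[1]
# One-shot method: a single pass over the patterns, summing outer products.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

# Each stored pattern should be a fixed point of the sign update.
for p in patterns:
    recalled = np.where(W @ p >= 0, 1, -1)
    print(np.array_equal(recalled, p))   # True for all three patterns
```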
Learning HNs through example
• Moving on to a slightly more complex problem described in Haykin's Neural Network book.
• The book used N=120 neurons and trained the network with 120-pixel
images, where each pixel was represented by one neuron.
• The following 8 patterns were used to train the neural network:
Learning HNs through example
• To demonstrate the power of HNs, a corrupted image was needed: the value of each pixel was flipped with probability p = 0.25.
• Using these corrupted images, the trained HN was run, and after a certain number of iterations the output images converged to one of the learned patterns.
• The next slides show the results that they obtained.
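A sketch of that experiment, using random ±1 vectors as stand-ins for the book's 120-pixel images (which are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, flip_prob = 120, 8, 0.25

# Stand-in training set: 8 random +/-1 patterns of 120 "pixels" each.
patterns = rng.choice([-1, 1], size=(M, N))

W = patterns.T @ patterns / N      # Hebbian storage
np.fill_diagonal(W, 0.0)

def recall(x, sweeps=50):
    s = x.copy()
    for _ in range(sweeps):
        prev = s.copy()
        for i in rng.permutation(N):             # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1
        if np.array_equal(s, prev):              # converged
            break
    return s

# Corrupt pattern 0 by flipping each pixel with probability 0.25, then recall.
corrupted = patterns[0] * np.where(rng.random(N) < flip_prob, -1, 1)
result = recall(corrupted)
print("overlap with stored pattern:", (result == patterns[0]).mean())
```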
[Result figures: the corrupted input images and the corresponding patterns recovered by the trained network]
Flow chart summarizing the overall process:
Train the HN using standard patterns → Update the weight vectors of the network → Run the trained network with a corrupted pattern → The network returns the reconstructed pattern
Shortcomings of HNs
• Training patterns can be at most about 14% of the number of nodes in the network.
• If more patterns are used, then:
  * The stored patterns become unstable.
  * Spurious stable states appear (i.e., stable states which do not correspond to any stored pattern).
• The network can sometimes misinterpret a corrupted pattern.
References
• Zurada: Introduction to Artificial Neural Systems
• Haykin: Neural Networks: A Comprehensive Foundation
• J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities", 1982
• R. Rojas: Neural Networks, Springer-Verlag, Berlin, 1996
• http://en.wikipedia.org/wiki/Hopfield_networks
Thanks for your attention