Graded Patterns in Attractor Networks
Tristan Webb, Supervisor: Jianfeng Feng, Co-Supervisor: Edmund Rolls
  Complexity Science DTC, Computational Biology Research Group
  University of Warwick

Summary

We demonstrate how noise can persist in a neural network as large as the brain. Graded firing patterns allow us to tune noise levels in the engineering of neural networks. The levels of noise in the brain may change with age and play a functional role in the retrieval of memory.

Attractor Neural Networks

Neural coding, and its relationship to behavior, is heavily researched in many areas of neuroscience. Attractor networks demonstrate how decisions, memories, and other cognitive representations can be encoded as a firing pattern (a set of active neurons) in a neural network.

An attractor network receives sensory information through connections known as synapses. The network is characterized by recurrent collateral synapses that provide feedback to its neurons, and this recurrent synaptic activity causes firing patterns to persist even after the input is removed.

Learning occurs through the modification of synaptic strengths (w_ij, where i is the ith neuron and j is the jth synapse). An associative (Hebbian) learning rule can create the structure needed for the recall of information: this type of learning strengthens connections between neurons that are simultaneously active.

The network dynamics can be thought of as gradient descent towards a local minimum in an energy landscape; when the network reaches this minimum, the learned pattern is recalled. The energy is defined as

    E = -\frac{1}{2} \sum_{ij} w_{ij} (y_i - \langle y \rangle)(y_j - \langle y \rangle),

where y_i is the firing of the ith neuron and ⟨y⟩ is the population's mean firing rate. Fixed points in attractor networks can correspond to a spontaneous state (where all neurons have a low firing rate), or to a persistent state in which a subset of neurons have a high firing rate.

[Figure: schematic of an attractor network — external inputs arrive at the dendrites, cell bodies produce the output firing y_i, and recurrent collateral synapses with weights w_ij feed the recurrent firing y_j back into the network.]
Network Dynamics

Neurons in the simulations follow integrate-and-fire (IF) dynamics, which describe the evolution of the membrane potential. We chose biologically realistic constants to obtain firing rates that are comparable to experimental measurements of neural activity. An IF neuron integrates synaptic current into its membrane potential, then fires when the potential reaches a voltage threshold.

The synaptic current flowing into each neuron is described in terms of neurotransmitter components. The four families of receptors used are GABA, NMDA, AMPA_rec, and AMPA_ext. The neurotransmitters released from a presynaptic excitatory neuron act on AMPA_rec and NMDA receptors, while inhibitory neurons transmit GABA currents. Each neuron also receives external input through a spike train modeled by a Poisson process with rate λ_i = 3.0 Hz.

The synaptic current flowing into a neuron is given by the following equation, where each term on the right-hand side is the current from one class of neurotransmitter:

    I_{\mathrm{syn}}(t) = I_{\mathrm{GABA}}(t) + I_{\mathrm{NMDA}}(t) + I_{\mathrm{AMPA,rec}}(t) + I_{\mathrm{AMPA,ext}}(t)
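A minimal sketch of these dynamics (illustrative only: the leaky-IF form is standard, but the constants and the lumped synaptic drive are assumptions, since the poster does not list its parameter values):

    import numpy as np

    # Assumed constants for illustration, not the poster's values
    V_REST, V_THRESH, V_RESET = -70.0, -50.0, -55.0   # membrane potentials (mV)
    TAU_M = 20.0                                      # membrane time constant (ms)
    DT = 0.1                                          # integration step (ms)

    def if_step(v, i_syn):
        """One Euler step of tau_m dV/dt = -(V - V_rest) + I_syn; fire at threshold."""
        v += DT * (-(v - V_REST) + i_syn) / TAU_M     # integrate synaptic current
        if v >= V_THRESH:
            return V_RESET, True                      # spike, then reset
        return v, False

    rng = np.random.default_rng(0)
    v, n_spikes = V_REST, 0
    for _ in range(int(1000 / DT)):                   # simulate one second
        # constant drive plus an external 3.0 Hz Poisson spike train (assumed sizes)
        i_syn = 21.0 + 5.0 * (rng.random() < 3.0e-3 * DT)
        v, fired = if_step(v, i_syn)
        n_spikes += fired
    print(n_spikes, "spikes in one second")           # a rate in the tens of Hz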
Architecture

We structure the network by setting the strengths of the interactions between two decision pools, D1 and D2, to values that could arise through associative learning. Neurons in the same decision pool are connected to each other with a strong average weight w+, and are connected to the other excitatory pool with a weak average weight w−.

[Figure: network architecture — the excitatory neurons are divided into the decision pools D1 and D2 (within-pool weight w+, between-pool weight w−) and a non-specific pool, with inhibitory neurons projecting to the excitatory sub-populations; a blowup shows the sub-populations of excitatory neurons.]
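For illustration, here is a sketch of building an excitatory weight matrix with this pool structure (the pool size and the w− value are assumed placeholders; only w+ = 2.1 comes from the poster):

    import numpy as np

    def pool_weight_matrix(n_per_pool=40, w_plus=2.1, w_minus=0.9):
        """Excitatory weights for two decision pools, D1 and D2.

        Within-pool connections use the strong weight w+; connections to the
        other pool use the weak weight w-. The w- value here is an assumed
        placeholder, since the poster specifies only w+ = 2.1.
        """
        n = 2 * n_per_pool
        w = np.full((n, n), w_minus)              # weak between-pool weights
        w[:n_per_pool, :n_per_pool] = w_plus      # strong weights within D1
        w[n_per_pool:, n_per_pool:] = w_plus      # strong weights within D2
        np.fill_diagonal(w, 0.0)                  # no self-connections
        return w

    w = pool_weight_matrix()
    print(w.shape)   # (80, 80)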
Graded Patterns

The network was simulated numerically for a period of four seconds. We present the network with two periods of different external stimulus levels: first a base period, and later a cue period. During the cue period the qualitative firing pattern in the network is sporadic and uneven. When the cues are applied, the firing rate of the neurons in the winning decision pool is raised through positive feedback, while the other pool is suppressed through increased inhibition.

We imposed uniform and graded firing patterns on the network by selecting the distribution of the recurrent weights within each decision pool. To achieve a uniform firing pattern, all weights were set to the same value w+ = 2.1. Graded firing patterns were achieved by drawing the weights from a discrete exponential-like distribution with mean value w+ ≈ 2.1.

[Figure: "Final Second Mean Neuron Rates" under uniform (left) and graded (right) weights — the mean firing rate (Hz) of each neuron in the winning and losing pools during the final second of simulation.]
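To make the weight manipulation concrete, here is an illustrative sketch of the two schemes (the exact discrete distribution is not specified on the poster, so this exponential profile is an assumption):

    import numpy as np

    def uniform_weights(n, w_plus=2.1):
        """Uniform pattern: every within-pool recurrent weight equals w+."""
        return np.full(n, w_plus)

    def graded_weights(n, w_plus=2.1, decay=0.1):
        """Graded pattern: discrete exponential-like weights rescaled to mean w+.

        The exact distribution used on the poster is not specified; this
        exponential profile across the pool is an illustrative assumption.
        """
        w = np.exp(-decay * np.arange(n))   # exponentially decaying profile
        return w_plus * w / w.mean()        # rescale so the mean is w+

    print(graded_weights(40).mean())        # ≈ 2.1, matching the uniform mean

Matching the means keeps the total recurrent drive comparable between conditions, so differences between simulations can be attributed to the spread of the weights rather than their overall strength.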
Results

Graded simulations were more likely to jump to a decision early. This could be caused by decreased stability of the spontaneous state. The changes in the reaction time distributions are statistically significant, and the decrease in reaction time is robust across different firing rates of the winning pool.

[Figure: "Reaction Times vs Firing Rates" — reaction time (msec) versus the winning pool's final-second firing rate (Hz) for graded and uniform simulations.]

Variability in the system also increases when graded patterns are introduced. Here we use the Fano factor to compute the trial-to-trial variability of membrane potentials across simulations. The Fano factor is calculated from the variance of the potential measured in a window of temporal length T and expressed as a function of time:

    F(T) = \frac{\sum_{n=1}^{T_r} \left[ V_{i,n}(T) - \langle V_i(T) \rangle \right]^2}{\langle V_i(T) \rangle},

where ⟨V_i(T)⟩ is the average potential of neuron i in the time window and T_r is the number of trials.

[Figure: "Average Fano Factor of Membrane Potential" — Fano factor versus time (seconds) for graded and uniform simulations.]
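A minimal sketch of this computation (the array layout and names are hypothetical; it assumes windowed potentials recorded as trials × windows):

    import numpy as np

    def fano_factor(V):
        """Fano factor of one neuron's windowed membrane potential.

        V has shape (n_trials, n_windows): V[n, t] is the neuron's average
        potential in window t of trial n (the poster's V_{i,n}(T)). Returns
        F(T) per window: the sum of squared deviations across trials divided
        by the trial-average potential, as in the formula above.
        """
        mean_V = V.mean(axis=0)                        # <V_i(T)> across trials
        return ((V - mean_V) ** 2).sum(axis=0) / mean_V

    # example: 20 trials, 35 windows of simulated potentials (arbitrary
    # positive units, so the ratio is well behaved)
    rng = np.random.default_rng(1)
    V = 1.0 + 0.02 * rng.standard_normal((20, 35))
    print(fano_factor(V).mean())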
Conclusion

The transition time to an attractor state, or reaction time, is decreased when neurons fire in a more biologically realistic pattern. There is greater variability in the system's states over time when graded patterns are introduced.

We argue that increased variance in the synaptic input to each neuron can be thought of as increased noise in the system. Conceptually, graded patterns are noisier because the recurrent synaptic input to neurons varies across the population.

As neural networks become larger, noise invariably becomes lower. However, in the brain there is still significant noise in the system even though the network is large. We present the hypothesis that this noise is due in part to graded firing patterns. Further work will explore this analytically.

Complexity DTC - University of Warwick - Coventry, UK | Mail: tristan.webb@warwick.ac.uk | WWW: http://warwick.ac.uk/go/tristanwebb
