
Simulation of Attractor recurrent networks in Winner-Take-all Architectures

by Federico Corradi — last modified Dec 16, 2013 02:28 PM

Introduction

 

The study of the collective dynamics of multiple neural populations with attractor states has been the subject of a good deal of investigation. This class of network is considered a basic building block of many different neural systems. In particular, reverberating states of cortical activity are thought to underlie various cognitive processes and functions.

 

 

Keywords: attractor networks, spiking neurons, Brian simulator, decision making.

 

Attractor network architecture

 

Using the Brian simulator, we simulate a network of 1000 neurons with clustered connections: 850 neurons in the excitatory population and 150 in the inhibitory population. The next figure shows the network topology, where J denotes the mean synaptic efficacy and C the connectivity level.

 

 


 

Figure 1 - Attractor Network Architecture
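
As a concrete illustration, here is a minimal sketch of such a clustered excitatory/inhibitory network written for Brian 2 (the text only names the Brian simulator, not a version). The population sizes come from the paragraph above; the neuron model, the efficacies J, and the connectivity C are assumed placeholder values, not the parameters used in the project.

from brian2 import *

# 850 excitatory and 150 inhibitory leaky integrate-and-fire neurons with
# random recurrent connections (hedged sketch, assumed parameters).
N_e, N_i = 850, 150
tau = 20*ms                                 # membrane time constant (assumed)
eqs = 'dv/dt = -v/tau : 1 (unless refractory)'

P = NeuronGroup(N_e + N_i, eqs, threshold='v > 1', reset='v = 0',
                refractory=2*ms, method='exact')
Pe, Pi = P[:N_e], P[N_e:]

# Mean efficacies J and connectivity level C (assumed values, cf. figure 1).
J_exc, J_inh, C = 0.1, -0.4, 0.2

S_ee = Synapses(Pe, Pe, on_pre='v += J_exc'); S_ee.connect(p=C)   # E -> E
S_ei = Synapses(Pe, Pi, on_pre='v += J_exc'); S_ei.connect(p=C)   # E -> I
S_ie = Synapses(Pi, Pe, on_pre='v += J_inh'); S_ie.connect(p=C)   # I -> E
S_ii = Synapses(Pi, Pi, on_pre='v += J_inh'); S_ii.connect(p=C)   # I -> I

run(1*second)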

 

 

Transfer Function

 

The network topology and parameters were chosen on the basis of mean-field calculations. The next figure shows the effective transfer function (ETF) of the neurons in the excitatory population.


 

 

 

Figure 2 - Effective transfer function
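
One way to estimate such an effective transfer function numerically is to drive a neuron with Poisson input at a range of mean rates and record its output rate; the sketch below does this for a single leaky integrate-and-fire neuron in Brian 2. The number of input synapses, the synaptic weight, and the range of input rates are assumed values chosen for illustration, not the project's parameters.

from brian2 import *
import numpy as np

rates_in = np.linspace(1, 30, 15)       # candidate mean input rates in Hz (assumed range)
rates_out = []

for r in rates_in:
    start_scope()                       # fresh network for every input rate
    tau = 20*ms
    G = NeuronGroup(1, 'dv/dt = -v/tau : 1 (unless refractory)',
                    threshold='v > 1', reset='v = 0', refractory=2*ms, method='exact')
    inp = PoissonGroup(1000, rates=r*Hz)            # 1000 input synapses (assumed)
    S = Synapses(inp, G, on_pre='v += 0.005')       # assumed synaptic weight
    S.connect()
    M = SpikeMonitor(G)
    run(2*second)
    rates_out.append(M.num_spikes / (2*second) / Hz)

# Fixed points of the recurrent network sit where the ETF crosses the
# identity line nu_out = nu_in (cf. the states N1 and N3 discussed below).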

 


The stimulation protocol is divided into three phases, during which everything remains unchanged except the mean frequency of the spikes emitted by the input population. In the first phase, lasting $250$ ms ($0$ s $< t < 0.25$ s), no input stimulus is provided and the input population emits spikes at a low firing rate of $1.5$ Hz. In the second phase, lasting $250$ ms ($0.25$ s $< t < 0.5$ s), we provide a mean input frequency of about $15$ Hz. During the third phase ($0.5$ s $< t < 0.8$ s), we remove the stimulus from the input of the network and again provide $1.5$ Hz noise.
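
In Brian 2 such a protocol can be implemented by changing the rate of the input Poisson population between successive run() calls. The sketch below shows only the input switching; the recurrent attractor wiring of the excitatory population is omitted for brevity, and the synaptic weight and connection probability are assumed values.

from brian2 import *

# Excitatory population (recurrent attractor wiring omitted in this sketch).
Pe = NeuronGroup(850, 'dv/dt = -v/(20*ms) : 1', threshold='v > 1',
                 reset='v = 0', method='exact')

inp = PoissonGroup(1000, rates=1.5*Hz)            # input population
S_in = Synapses(inp, Pe, on_pre='v += 0.01')      # assumed efficacy
S_in.connect(p=0.5)                               # assumed connectivity

inp.rates = 1.5*Hz; run(250*ms)   # phase 1: 1.5 Hz noise,  0 s    < t < 0.25 s
inp.rates = 15*Hz;  run(250*ms)   # phase 2: 15 Hz cue,     0.25 s < t < 0.5 s
inp.rates = 1.5*Hz; run(300*ms)   # phase 3: 1.5 Hz noise,  0.5 s  < t < 0.8 s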

The response of the network is reported in figure 3: the network starts in its lower meta-stable state, jumps to high frequencies during the second phase (when the $15$ Hz cue stimulus is provided), and relaxes into its upper meta-stable state in the last phase. The cue stimulus provides the energy needed to jump from the lower state ($N1 = 0.5$ Hz) to the upper state ($N3 = 110$ Hz), as predicted by the ETF (figure 2). Thanks to the strong positive feedback, the network is able to self-sustain its activity at high frequencies. This demonstrates the validity of the ETF and the stability of the two states (down and up).

 


Figure 3 - Attractor response to a cue stimulus. The green trace is the PSTH of the inhibitory population and the blue trace is the PSTH of the excitatory population. The stimulation protocol is divided into three phases: a) noise for the first $250$ ms, b) the $15$ Hz cue stimulus for $250 < t < 450$ ms, c) noise for $t > 450$ ms.

 

 

Attractors in winner-take-all architecture

 

In this section we discuss a possible Winner-Take-All attractor network architecture. We use the base network described in figure 1 as a module to build a larger network that exhibits Winner-Take-All dynamics. The basic idea is to have two attractor states that are in competition with each other.

 



Figure 4 - Attractor Winner-Take-All architecture

 

 

For that reason we use inhibitory connections between the two excitatory populations of neurons (Ge and Xe). These can be seen as long-distance inhibitory connections, while the short-distance connections are excitatory. Figure 4 reports the connectivity and the efficacy of each of these connections.

We successfully obtained winner-take-all behavior; the stimulation protocol is shown at the bottom of figure 5. The input pulse stimulus is first provided to the Xe population, from $0.15$ s to $0.3$ s. At the removal of the input stimulus ($0.3$ s), the Xe population relaxes into a reverberant state of activity (i.e. an attractor). We then provide an input stimulus to the Ge population; this stimulus affects the activity of the Xe population via the inhibitory connections and lowers its activity (Xe rate $\sim 1$ Hz at $t = 0.5$ s), while Ge jumps to its high stable state of activity at about $200$ Hz.

 

 


Figure 5 - PSTH of an attractor WTA architecture
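
The sketch below puts the two pieces together in Brian 2: two attractor modules (Xe with its local inhibitory pool Xi, and Ge with Gi), cross-connections from each excitatory pool to the other module's inhibitory pool, and the two-pulse protocol just described. All weights, connection probabilities, input population sizes, input rates, and the timing of the Ge pulse are assumed placeholder values, not the project's parameters.

from brian2 import *

# Two attractor modules with long-distance cross-inhibition (hedged sketch).
eqs = 'dv/dt = -v/(20*ms) : 1 (unless refractory)'

def pool(n):
    return NeuronGroup(n, eqs, threshold='v > 1', reset='v = 0',
                       refractory=2*ms, method='exact')

Xe, Xi, Ge, Gi = pool(850), pool(150), pool(850), pool(150)

def connect(src, dst, weight, p):
    # Random connection with fixed efficacy `weight` and probability `p`.
    S = Synapses(src, dst, model='w : 1', on_pre='v_post += w')
    S.connect(p=p)
    S.w = weight
    return S

# Local attractor wiring inside each module.
local = [connect(Xe, Xe, 0.1, 0.2), connect(Xe, Xi, 0.1, 0.2), connect(Xi, Xe, -0.4, 0.2),
         connect(Ge, Ge, 0.1, 0.2), connect(Ge, Gi, 0.1, 0.2), connect(Gi, Ge, -0.4, 0.2)]

# Long-distance competition: Xe excites Gi (which inhibits Ge) and vice versa.
cross = [connect(Xe, Gi, 0.1, 0.2), connect(Ge, Xi, 0.1, 0.2)]

# Independent Poisson inputs to the two excitatory pools.
inp_x = PoissonGroup(500, rates=1.5*Hz)
inp_g = PoissonGroup(500, rates=1.5*Hz)
ext = [connect(inp_x, Xe, 0.05, 0.5), connect(inp_g, Ge, 0.05, 0.5)]

run(150*ms)                          # baseline noise
inp_x.rates = 15*Hz;  run(150*ms)    # pulse to Xe, 0.15 s - 0.3 s (as in the text)
inp_x.rates = 1.5*Hz; run(100*ms)    # Xe reverberates with no cue
inp_g.rates = 15*Hz;  run(150*ms)    # pulse to Ge suppresses Xe via inhibition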

 

We now use a different stimulation protocol in which both populations receive the same mean input firing rate. To do so, we connect the input stimulus to both excitatory populations (Xe and Ge), using the same total number of synapses as in the previous simulation: before, the input population of $1000$ neurons was connected with $c_{e-in} = 0.5$; now we connect it with $c_{in-xe} = 0.25$ and $c_{in-ge} = 0.25$. In this way the input is spread equally over the two pools of excitatory neurons, Xe and Ge. In figure 6 every dot represents the mean firing rates of the two populations: along the x-axis we plot the mean firing rate of population $Xe$ and along the y-axis the mean firing rate of population $Ge$.

 


Figure 6 - Network dynamics. We provide the same input to both excitatory populations (Xe and Ge). During the initial transient, the external stimulation moves the network activity around the central cluster of dots, where both populations fire at around $150$ Hz. After this transient, one of the two populations wins the competition and shows persistent activity at around $250$ Hz.
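
For completeness, here is a hedged sketch of how the shared-input protocol and a figure 6 style plot could be set up in Brian 2: a single Poisson input population projects to both Xe and Ge with connection probability 0.25 each, the two population rates are recorded with PopulationRateMonitor, and the smoothed rates are plotted against each other. The recurrent and cross-inhibitory wiring is assumed to be in place as in the previous sketch; the input rate and synaptic weights are placeholder values.

from brian2 import *
import matplotlib.pyplot as plt

eqs = 'dv/dt = -v/(20*ms) : 1'
Xe = NeuronGroup(850, eqs, threshold='v > 1', reset='v = 0', method='exact')
Ge = NeuronGroup(850, eqs, threshold='v > 1', reset='v = 0', method='exact')
# ... recurrent attractor and cross-inhibitory wiring as in the previous sketch ...

inp = PoissonGroup(1000, rates=15*Hz)                              # shared input (rate assumed)
S_x = Synapses(inp, Xe, on_pre='v += 0.01'); S_x.connect(p=0.25)   # c_in-xe = 0.25
S_g = Synapses(inp, Ge, on_pre='v += 0.01'); S_g.connect(p=0.25)   # c_in-ge = 0.25

R_x = PopulationRateMonitor(Xe)
R_g = PopulationRateMonitor(Ge)
run(1*second)

# One dot per time bin in the (Xe rate, Ge rate) plane, as in figure 6.
plt.plot(R_x.smooth_rate(width=10*ms)/Hz, R_g.smooth_rate(width=10*ms)/Hz, '.')
plt.xlabel('Xe mean firing rate (Hz)')
plt.ylabel('Ge mean firing rate (Hz)')
plt.show()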