Impact of Parameter Variability on System Level

by Lorenz Muller

The next step after assessing the impact of parameter variability at the level of single neurons is to study it at the system or network level: how well can the network still perform its function when the parameters of its neurons are drawn from a given probability distribution?
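As a sketch of the basic procedure (Python, with an entirely placeholder network model and performance metric, not the project's code), one can repeatedly draw neuron parameters from an assumed mismatch distribution and collect a statistic of network performance over the draws:

    # Minimal sketch: Monte Carlo over parameter mismatch at the network level.
    # run_network() is a placeholder for a real network simulation; the Gaussian
    # mismatch model and all numbers are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def run_network(thresholds, leaks):
        """Placeholder network simulation returning a scalar performance score.
        Here it simply penalizes parameter spread so the sketch is runnable."""
        return 1.0 / (1.0 + thresholds.std() + leaks.std())

    n_neurons, n_draws = 64, 200
    scores = []
    for _ in range(n_draws):
        # Mismatch model: each neuron's threshold and leak is drawn from a
        # Gaussian around its nominal value (spread chosen arbitrarily here).
        thresholds = rng.normal(loc=1.0, scale=0.15, size=n_neurons)
        leaks = rng.normal(loc=0.05, scale=0.01, size=n_neurons)
        scores.append(run_network(thresholds, leaks))

    print(f"mean performance {np.mean(scores):.3f} +/- {np.std(scores):.3f}")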

As an example we consider a winner-take-all (WTA) network that has to choose a winner in an equal and fair competition and then sustain activity at the position of the winner. To assess performance under mismatch we look at the decision bias, at the failure rate of memory retention, and at the occurrence of instabilities.
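A minimal sketch of how such measurements can be set up is given below. It uses an assumed two-unit rate-based soft WTA with one global (linear) inhibitory unit; mismatch is modelled as a per-unit gain factor, and decision bias, memory-retention failures and instabilities (both units remaining active) are counted over many parameter draws. All equations and numbers are illustrative, not those of the network studied here.

    # Assumed rate-based soft WTA: two excitatory units, one linear inhibitory
    # unit, per-unit gain factors modelling mismatch. With equal inputs we
    # estimate decision bias, memory-retention failures and instabilities.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, tau, r_max = 1.0, 10.0, 100.0          # ms, ms, Hz
    w_self, w_ei, w_ie = 1.7, 1.0, 0.5         # self-excitation, E->I, I->E

    def f(x):
        return np.clip(x, 0.0, r_max)          # rectified, saturating rate

    def trial(gain, t_stim=200, t_delay=300):
        e, i = np.zeros(2), 0.0                # excitatory rates, inhibitory rate
        for t in range(t_stim + t_delay):
            inp = 20.0 + rng.normal(0.0, 0.5, 2) if t < t_stim else 0.0
            e += dt / tau * (-e + f(gain * (w_self * e - w_ie * i + inp)))
            i += dt / tau * (-i + w_ei * e.sum())   # linear global inhibition
        return e                                # rates after the delay period

    n_nets, n_trials = 50, 20
    biases, mem_fail, unstable = [], 0, 0
    for _ in range(n_nets):
        gain = rng.normal(1.0, 0.1, 2)          # ~10% gain mismatch per unit
        wins = silent = both = 0
        for _ in range(n_trials):
            on = trial(gain) > 0.5 * r_max
            wins += int(on[0] and not on[1])
            silent += int(not on.any())         # memory retention failed
            both += int(on.all())               # no single winner: instability
        biases.append(abs(wins / n_trials - 0.5))
        mem_fail += silent
        unstable += both

    print(f"mean |decision bias| {np.mean(biases):.2f}, "
          f"memory failures {mem_fail}/{n_nets * n_trials}, "
          f"instabilities {unstable}/{n_nets * n_trials}")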

Another example we studied is the relational network of Cook, Jug, Krautz and Steger (2010). Two WTA populations are linked by plastic synapses; rather than the original learning rule, we used the rule of Brader, Senn and Fusi (2007), which we have implemented in aVLSI. The plastic synapses learn the relation between the inputs to the two WTA populations (in the figure below, the relation x = -y^2). In the case below we averaged over multiple all-to-all connected populations. In this setup parameter variability can even be an advantage: it makes different populations sensitive to different regions of the input space, so that the ensemble as a whole is more robust to input noise.
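The sketch below is a deliberately simplified caricature of such a relational setup, not the Brader-Senn-Fusi rule or the original network of Cook et al.: binary synapses from a y-coding population to an x-coding population are potentiated where pre- and postsynaptic units are co-active and stochastically depressed otherwise, after which the weight matrix can be read out to recover x = -y^2. All names and parameters are illustrative.

    # Caricature of relational learning with binary stochastic synapses
    # (assumed rule, not Brader-Senn-Fusi): population-coded y and x = -y**2
    # are presented together; synapses from the y-population to the
    # x-population are potentiated for co-active pairs and depressed otherwise.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 32                                     # units per population
    x_centres = np.linspace(-1.0, 0.0, n)
    y_centres = np.linspace(-1.0, 1.0, n)

    def pop_code(v, centres, width=0.1):
        """Gaussian bump of activity centred on the encoded value v."""
        span = centres[-1] - centres[0]
        return np.exp(-((v - centres) / (width * span)) ** 2)

    def train(n_samples=3000, p_pot=0.2, p_dep=0.05):
        w = rng.random((n, n)) < 0.5           # binary synapses, y-pop -> x-pop
        for _ in range(n_samples):
            y = rng.uniform(-1.0, 1.0)
            pre = pop_code(y, y_centres) > 0.5          # active y units
            post = pop_code(-y * y, x_centres) > 0.5    # active x units
            ltp = np.outer(pre, post) & (rng.random((n, n)) < p_pot)
            ltd = np.outer(pre, ~post) & (rng.random((n, n)) < p_dep)
            w = (w | ltp) & ~ltd
        return w

    def predict_x(w, y):
        """Estimate x from y as the centre of mass of the driven x-population."""
        drive = w.T.astype(float) @ pop_code(y, y_centres)
        return float(x_centres @ drive / drive.sum())

    w = train()
    for y in (-0.8, -0.3, 0.0, 0.5, 0.9):
        print(f"y={y:+.1f}  x_hat={predict_x(w, y):+.2f}  x_true={-y * y:+.2f}")

Averaging the readout of several independently trained (and differently mismatched) populations, as described above, would correspond to averaging several such weight matrices before the readout.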

Figure 1 (relational network with Fusi learning): (TR) raster plot, (TL) membrane potential of neuron 0, (BR) excitatory and inhibitory population rates, (BL) weights of the plastic synapses at the end of learning.