Stochastic variational learning in recurrent spiking networks

The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike-train histories, and the derived learning rule takes the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) that convey information about "novelty" on statistically rigorous grounds. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose an experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
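To make the structure of such a rule concrete, the following is a minimal sketch of a three-factor update in which a local STDP-like term (presynaptic eligibility trace times postsynaptic prediction error) is gated by a global novelty signal. The specific choices here (sigmoidal escape-noise firing probability, exponential presynaptic trace, surprise measured as negative log-likelihood relative to a running baseline, and all constants) are illustrative assumptions, not the exact formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative network size and simulation length.
n_neurons = 20
n_steps = 500
dt = 1.0  # ms

# Recurrent synaptic weights (no self-connections).
W = 0.1 * rng.standard_normal((n_neurons, n_neurons))
np.fill_diagonal(W, 0.0)

# Hypothetical trace time constant and learning rate.
tau_pre = 20.0   # ms, presynaptic eligibility trace
eta = 1e-3       # learning rate

def spike_probability(potential):
    """Stochastic (escape-noise) firing probability per time step."""
    return 1.0 / (1.0 + np.exp(-potential))

pre_trace = np.zeros(n_neurons)   # low-pass filtered presynaptic spikes
spikes = np.zeros(n_neurons)
baseline = 0.0

for t in range(n_steps):
    # Membrane potential from recurrent input, then spike probabilities.
    u = W @ spikes
    p = spike_probability(u)

    # Sample spikes from the stochastic neuron model.
    new_spikes = (rng.random(n_neurons) < p).astype(float)

    # Global "novelty" signal: surprise (negative log-likelihood) of the
    # emitted spikes, relative to a slowly adapting baseline. This single
    # scalar is broadcast to all synapses, like a neuromodulator.
    surprise = -np.sum(new_spikes * np.log(p + 1e-12)
                       + (1.0 - new_spikes) * np.log(1.0 - p + 1e-12))
    baseline = surprise if t == 0 else 0.99 * baseline + 0.01 * surprise
    novelty = surprise - baseline

    # Local STDP-like factor per synapse: postsynaptic prediction error
    # times presynaptic eligibility trace.
    local = np.outer(new_spikes - p, pre_trace)

    # Three-factor update: the global novelty term gates the local term.
    W += eta * novelty * local
    np.fill_diagonal(W, 0.0)

    # Update traces and network state.
    pre_trace += (-pre_trace / tau_pre) * dt + new_spikes
    spikes = new_spikes

print("final mean |W|:", np.abs(W).mean())
```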


Published in:
Frontiers in Computational Neuroscience, 8
Year:
2014
Publisher:
Lausanne, Frontiers Research Foundation
ISSN:
1662-5188
Keywords:
Note:
Funded by the European Community under ERC and BrainScaleS and by the Swiss National Science Foundation (SNSF)
Financial support was provided by the Swiss National Science Foundation (SystemsX), as well as by the European Research Council (Grant Agreement no. 268 689) and the European Community's Seventh Framework Programme (Grant Agreement no. 269921, BrainScaleS)
Laboratories:




 Record created 2014-05-19, last modified 2018-10-01

