Stochastic variational learning in recurrent spiking networks
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike-train histories, and the derived learning rule takes the form of a local Spike-Timing-Dependent Plasticity rule modulated by global factors (neuromodulators) that convey information about "novelty" on statistically rigorous grounds. Simulations show that our model can learn both stationary and non-stationary patterns of spike trains. We also propose an experiment that could be performed with animals to test the dynamics of the predicted novelty signal.
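The abstract describes a three-factor structure: a local STDP-like eligibility term at each synapse, multiplied by a global "novelty" modulator. The sketch below is only an illustration of that structure, not the paper's derived rule; the spike probabilities, trace dynamics, and the surprise-based modulator are all simplified assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000        # simulation steps (1 ms each); assumed for illustration
tau = 20.0      # trace time constant (ms); assumed
eta = 0.01      # learning rate; assumed
p_fire = 0.02   # assumed baseline firing probability per step

# Hypothetical binary spike trains for one pre/post neuron pair.
pre = rng.random(T) < p_fire
post = rng.random(T) < p_fire

w = 0.5         # synaptic weight
x_pre = 0.0     # presynaptic spike trace
e = 0.0         # eligibility trace (the local STDP-like factor)

for t in range(T):
    # Low-pass filter of presynaptic spikes.
    x_pre += -x_pre / tau + pre[t]
    # Local term: pre-trace sampled at postsynaptic spike times
    # (a pre-before-post pairing), accumulated into an eligibility trace.
    e += -e / tau + x_pre * post[t]
    # Global modulator: surprise (-log likelihood) of the postsynaptic
    # outcome under a fixed predicted rate -- a stand-in for the
    # model-based novelty signal, not the paper's exact quantity.
    novelty = -np.log(p_fire) if post[t] else -np.log(1.0 - p_fire)
    # Three-factor update: local eligibility times global modulator.
    w += eta * novelty * e

print(w)
```

Because the modulator here is always non-negative, this toy version only potentiates; the rule described in the abstract would instead weight the local term by a statistically grounded novelty signal that can push weights in either direction.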
Financial support was provided by the Swiss National Science Foundation (SystemsX), the European Research Council (Grant Agreement no. 268 689), and the European Community's Seventh Framework Program (Grant Agreement no. 269921, BrainScaleS).
Record created on 2014-05-19, modified on 2016-08-09