Evolving Neuromodulatory Topologies for Reinforcement Learning-like Problems

Environments with varying reward contingencies pose a challenge to many living creatures. In such conditions, animals capable of adaptation and learning derive an advantage. Recent studies suggest that neuromodulatory dynamics are a key factor in regulating learning and adaptivity when reward conditions are subject to variability. In biological neural networks, specific circuits generate modulatory signals, particularly in situations that involve learning cues such as a reward or novel stimuli. Modulatory signals are then broadcast and applied to target synapses to activate or regulate synaptic plasticity. Artificial neural models that include modulatory dynamics could prove their potential in uncertain environments where online learning is required. However, a topology that synthesises and delivers modulatory signals to target synapses must be devised. So far, only handcrafted architectures of this kind have been attempted. Here we show that modulatory topologies can be designed autonomously by artificial evolution and achieve learning capabilities superior to those of traditional fixed-weight or Hebbian networks. In our experiments, we show that simulated bees autonomously evolved a modulatory network to maximise the reward in a reinforcement learning-like environment.
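The gating mechanism described above can be illustrated with a minimal sketch (not the authors' implementation; the function name, learning rate, and toy values are illustrative assumptions): a standard Hebbian weight update is multiplied by a broadcast modulatory signal m, so plasticity is active only when a learning cue such as a reward raises m.

```python
import numpy as np

def modulated_hebbian_update(w, pre, post, m, eta=0.1):
    """Hebbian update gated by a modulatory signal.

    w    -- weight matrix (outputs x inputs)
    pre  -- presynaptic activity vector
    post -- postsynaptic activity vector
    m    -- modulatory signal broadcast to the target synapses;
            m = 0 freezes the weights, m > 0 enables plasticity
    eta  -- learning rate (illustrative value)
    """
    return w + m * eta * np.outer(post, pre)

# Toy usage: 2 inputs, 1 output.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.5])
post = np.array([1.0])

w = modulated_hebbian_update(w, pre, post, m=0.0)  # no cue: weights unchanged
w = modulated_hebbian_update(w, pre, post, m=1.0)  # reward cue: Hebbian update applied
```

In the evolved networks described in the paper, the signal m itself is produced by dedicated modulatory neurons whose connectivity is shaped by evolution, rather than being set by hand as in this sketch.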

Published in:
Proceedings of the 2007 IEEE Congress on Evolutionary Computation, 2471-2478
Presented at:
IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, September 25-28, 2007
IEEE Press

