Conference paper

Multiple task learning in recurrent neural networks using layered reservoirs

In this paper, a two-layer architecture for a reservoir-based recurrent neural network is proposed to learn multiple tasks that share the same inputs. The inputs are fed into a general workspace layer, whose weights are adapted in an unsupervised manner to provide maximum information to the subsequent layers. The outputs of this layer drive the task-specific layers, whose weights are optimized in a supervised process with respect to the output error of each given task. Simulation results on a two-task example show that the proposed layered scheme outperforms two alternative approaches: multiple readouts on a single reservoir, and two separate reservoirs.
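The layered scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the shared "workspace" reservoir is left with fixed random weights (the paper adapts it unsupervised), and the task-specific readouts are trained by ridge regression on a toy two-task problem (one-step prediction and squaring of a shared sine input). All dimensions, signals, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    # Random input and recurrent weights; recurrent matrix scaled
    # to the given spectral radius for echo-state stability.
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    # Drive the reservoir with the input sequence and collect states.
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def ridge_readout(states, targets, reg=1e-6):
    # Supervised linear readout: minimize output error with ridge regression.
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + reg * np.eye(n), states.T @ targets)

# Shared input signal and two tasks defined on it (toy example).
T = 500
u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None]
y_tasks = {
    "task1": np.roll(u[:, 0], -1),  # one-step-ahead prediction
    "task2": u[:, 0] ** 2,          # squared signal
}

# Layer 1: shared workspace reservoir (fixed random here; adapted
# unsupervised in the paper).
W_in1, W1 = make_reservoir(1, 100)
S1 = run_reservoir(W_in1, W1, u)

# Layer 2: task-specific reservoirs driven by the workspace states,
# each with its own supervised readout.
outputs = {}
for name, y in y_tasks.items():
    W_in2, W2 = make_reservoir(100, 50)
    S2 = run_reservoir(W_in2, W2, S1)
    w_out = ridge_readout(S2[100:], y[100:])  # discard washout period
    outputs[name] = S2 @ w_out
```

In this sketch the single-reservoir-with-multiple-readouts baseline corresponds to training both readouts directly on `S1`, and the two-separate-reservoirs baseline to building two independent first-layer reservoirs, one per task.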
