Multiple task learning in recurrent neural networks using layered reservoirs

In this paper, a two-layer architecture for a reservoir-based recurrent neural network is proposed for learning multiple tasks that share the same inputs. The inputs are fed into a general workspace layer whose weights are adapted in an unsupervised manner to provide maximum information to the next layers. The outputs of this layer drive the task-specific layers, whose weights are optimized in a supervised manner with respect to the output error of each given task. Simulation results on a two-task example show that the proposed layered scheme outperforms two alternative approaches, namely multiple readouts on a single reservoir and two separate reservoirs.


Published in:
Proceedings of the 17th International Workshop on Nonlinear Dynamics of Electronic Systems (NDES 2009), 62-65
Presented at:
17th International Workshop on Nonlinear Dynamics of Electronic Systems (NDES 2009), Rapperswil, Switzerland, June 21-24, 2009
Year:
2009
Publisher:
Zurich
 Record created 2009-09-03, last modified 2018-03-17
