On the Robustness of Perceptron Learning Recurrent Networks

This paper extends a recent time-domain feedback analysis of Perceptron learning networks to recurrent networks and provides a study of the robustness performance of the training phase in the presence of uncertainties. In particular, a bound is established on the step-size parameter in order to guarantee that the training algorithm will behave as a robust filter in the sense of H∞-theory. The paper also establishes that the training scheme can be interpreted in terms of a feedback interconnection that consists of two major blocks: a time-variant lossless (i.e., energy-preserving) feedforward block and a time-variant dynamic feedback block. The l2-stability of the feedback structure is then analyzed by using the small-gain and the mean-value theorems.
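The abstract refers to bounding the step-size parameter so that the training update remains well behaved. As a loose illustration only, the sketch below trains a basic (non-recurrent) perceptron with a step size kept below a conservative normalization of the form mu < 1/max‖x‖²; this particular bound is a common sufficient condition for gradient-type updates and is an assumption here, not the bound derived in the paper.

```python
# Illustrative sketch: perceptron error-correction update with a
# conservatively small step size. The choice mu = 0.5 / max||x||^2 is
# an assumed, generic normalization, NOT the H-infinity bound from the
# paper (which concerns recurrent networks).
import numpy as np

def perceptron_train(X, y, mu=None, epochs=20):
    """Train a simple perceptron on labels in {-1, +1}."""
    n, d = X.shape
    w = np.zeros(d)
    max_norm_sq = np.max(np.sum(X**2, axis=1))
    if mu is None:
        mu = 0.5 / max_norm_sq  # conservative step size (assumption)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w >= 0 else -1.0
            w += mu * (yi - pred) * xi  # update only on misclassification
    return w

# Usage on a small linearly separable toy problem
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = perceptron_train(X, y)
```

With the step size kept small, the weight trajectory stays bounded on this toy data and the final weights separate the two classes.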


Published in:
IFAC Proceedings Volumes, 29, 1, 4172-4177
Presented at:
13th IFAC World Congress, San Francisco, CA, USA
Year:
1996
Publisher:
Elsevier
ISSN:
1474-6670
 Record created 2017-12-19, last modified 2018-09-13