Title: On the Robustness of Perceptron Learning Recurrent Networks
Authors: Rupp, M.; Sayed, Ali H.
Year: 1996
Date deposited: 2017-12-19
DOI: 10.1016/S1474-6670(17)58335-4
Handle: https://infoscience.epfl.ch/handle/20.500.14299/143446
Type: text::conference output::conference proceedings::conference paper

Abstract: This paper extends a recent time-domain feedback analysis of Perceptron learning networks to recurrent networks and provides a study of the robustness performance of the training phase in the presence of uncertainties. In particular, a bound is established on the step-size parameter in order to guarantee that the training algorithm behaves as a robust filter in the sense of H∞-theory. The paper also establishes that the training scheme can be interpreted in terms of a feedback interconnection that consists of two major blocks: a time-variant lossless (i.e., energy-preserving) feedforward block and a time-variant dynamic feedback block. The l2-stability of the feedback structure is then analyzed by using the small-gain and the mean-value theorems.
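Note: The abstract refers to a step-size bound that guarantees H∞-robust behavior of the training phase, but the bound itself is not reproduced in this record. The sketch below is a minimal, hypothetical illustration of a gradient-type update for a single sigmoidal neuron (non-recurrent, for simplicity) in which the step size is clipped against an assumed sufficient condition of the normalized form mu < 1/||u_i||^2; the function names, the data, and the bound are illustrative assumptions, not the paper's result.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_perceptron(inputs, targets, mu=0.1, seed=0):
    """Gradient-type training of a single sigmoidal neuron.

    At each step the effective step size is clipped against an
    assumed sufficient condition mu < 1 / ||u_i||^2 (illustrative
    only; the paper's actual H-infinity bound is not reproduced
    in this record).
    """
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(inputs.shape[1]) * 0.1
    for u, d in zip(inputs, targets):
        y = sigmoid(u @ w)
        err = d - y
        # Assumed per-sample bound on the step size; keep the
        # update strictly inside it.
        bound = 1.0 / max(u @ u, 1e-12)
        step = min(mu, 0.99 * bound)
        # Instantaneous-gradient update with the sigmoid derivative.
        w = w + step * err * y * (1.0 - y) * u
    return w

# Tiny usage example on synthetic data (hypothetical).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    U = rng.standard_normal((200, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    D = sigmoid(U @ w_true)
    w_hat = train_perceptron(U, D, mu=0.5)
    print("estimated weights:", w_hat)
```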