A feedback analysis of perceptron learning for neural networks
This paper provides a time-domain feedback analysis of the perceptron learning algorithm. It studies the robustness of the algorithm in the presence of uncertainties that may be due to noisy perturbations in the reference signal or to modeling mismatch. In particular, bounds are established on the step-size parameter that guarantee the resulting algorithm behaves as a robust filter in the sense of H∞ theory. The paper also establishes that an intrinsic feedback structure can be associated with the training scheme. The feedback configuration is motivated via energy arguments and is shown to consist of two major blocks: a time-variant lossless (i.e., energy-preserving) feedforward path and a time-variant feedback path. The stability of the feedback structure is then analyzed via the small-gain theorem, and choices of the step-size parameter that guarantee faster convergence are derived by appealing to the mean-value theorem. Simulation results are included to validate the findings.
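The abstract centers on step-size conditions for perceptron-type updates. As a minimal illustrative sketch (not the paper's derivation), the snippet below trains a hard-threshold perceptron with a per-sample normalized step size mu_i = alpha / ||u_i||^2, 0 < alpha < 1. The function name, the constant alpha, the normalization, and the toy data are all assumptions for illustration; the paper's actual H∞ robustness bounds are not reproduced here.

# Hedged sketch of perceptron learning with a normalized step size.
# Assumptions (not from the paper): hard-threshold activation, the
# per-sample choice mu_i = alpha / ||u_i||^2, and all names below.
import numpy as np

def train_perceptron(U, d, alpha=0.5, epochs=20):
    """U: (N, n) array of input vectors; d: (N,) array of +/-1 targets."""
    w = np.zeros(U.shape[1])
    for _ in range(epochs):
        for u, target in zip(U, d):
            y = 1.0 if w @ u >= 0 else -1.0  # hard-threshold output
            e = target - y                   # a priori output error
            mu = alpha / (u @ u + 1e-12)     # normalized step size (illustrative)
            w = w + mu * e * u               # gradient-type weight update
    return w

# Toy usage on a linearly separable problem (first input component is a bias).
U = np.array([[1.0, 2.0, 2.0],
              [1.0, 2.0, 3.0],
              [1.0, -2.0, -1.0],
              [1.0, -3.0, -2.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])
w = train_perceptron(U, d)
print(np.sign(U @ w))  # -> [ 1.  1. -1. -1.]

The per-sample normalization is one common way to keep each update's energy bounded relative to the input, in the spirit of (but not identical to) the small-gain-type step-size conditions the abstract describes.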
Year: 1995
Volume: 2
Pages: 894-898
Status: REVIEWED
Event name | Event place | Event date |
 | Pacific Grove, CA, USA | October 30 - November 1, 1995 |