Unconstrained Learning of Networked Nonlinear Systems via Free Parametrization of Stable Interconnected Operators
This paper characterizes a new parametrization of nonlinear networked incrementally L2-bounded operators in discrete time. The distinctive novelty is that our parametrization is free: a sparse large-scale operator with bounded incremental L2 gain is obtained for any choice of the real values of our parameters. This property allows one to search freely for optimal parameters via unconstrained gradient descent, enabling direct applications in large-scale optimal control and system identification. Further, we can embed prior knowledge about the interconnection topology and stability properties of the system directly into the large-scale distributed operator we design. Our approach is highly general in that it can seamlessly encapsulate and interconnect state-of-the-art Neural Network (NN) parametrizations of stable dynamical systems. To demonstrate the effectiveness of this approach, we provide a simulation example showcasing the identification of a networked nonlinear system. The results underscore the superiority of our free parametrization over standard NN-based identification methods in which priors on the system topology and local stability properties are not enforced.
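As a minimal illustration of the "free parametrization" idea described in the abstract, the following sketch (an assumption-laden toy example, not the paper's actual construction) maps any unconstrained real matrix V smoothly to a matrix W whose induced 2-norm is strictly below a prescribed gain γ. The linear operator x ↦ Wx then has incremental L2 gain below γ for every choice of parameter values, so V can be optimized by plain gradient descent without any stability constraint.

```python
import numpy as np

def free_contractive_matrix(V, gamma=1.0):
    """Map an arbitrary real matrix V to W with spectral norm < gamma.

    Smooth in V, so W can be trained by unconstrained gradient descent
    while the L2-gain bound holds for *any* parameter values.
    (Illustrative sketch only; not the parametrization from the paper.)
    """
    s = np.linalg.norm(V, 2)                 # spectral norm (largest singular value)
    return gamma * V / np.sqrt(s**2 + 1.0)   # ||W||_2 = gamma * s / sqrt(s^2 + 1) < gamma

rng = np.random.default_rng(0)
V = rng.standard_normal((5, 5)) * 10.0       # arbitrary "free" parameter values
W = free_contractive_matrix(V, gamma=0.9)
print(np.linalg.norm(W, 2) < 0.9)            # → True
```

The key property mirrored here is that the stability certificate is built into the map itself: no projection or constraint handling is needed during training, which is what enables the unconstrained gradient descent mentioned in the abstract.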
2-s2.0-85200574298
2024
9783907144107
651
656
REVIEWED
EPFL
Event name | Event acronym | Event place | Event date |
| | Stockholm, Sweden | 2024-06-25 - 2024-06-28 |