Abstract

The present work belongs to the vast body of research devoted to behaviors that emerge when homogeneous or heterogeneous agents interact. We adopt a stylized point of view in which the individual agents' activities can be assimilated to nonlinear dynamical systems, each with its own set of specific parameters. Since the pioneering work of C. Huyghens in the seventeenth century it has been established that interactions between agents modify their individual evolutions, and that for ad hoc interactions between agents that are not too dissimilar, synchronized behaviors emerge. In this classical approach, however, each agent recovers its individual evolution when the interactions are removed, as summarized by the French aphorism "Chassez le naturel, il revient au galop" ("chase away the natural and it comes back at a gallop"). The position we adopt in this work differs qualitatively from this classical approach. Here, we construct a mathematical framework that depicts the idea of systems interacting not only via their state variables, but also via a self-adaptive capability of the agents' local parameters. Specifically, we consider a network in which each vertex is endowed with a dynamical system having initially different parameters. We explicitly construct adaptive mechanisms which, according to the system's state, tune the values of the local parameters. In our construction, the agents are modeled by dissipative ortho-gradient vector fields possessing local attractors (e.g. limit cycles). The forces describing the agents' interactions derive either from a generalized potential or from linear combinations of coupling functions. Contrary to classical synchronization, which disappears when interactions are removed, here the system self-adapts and acquires consensual values for the set of local parameters. The consensual values are definitively "learned" (i.e. they remain in consensus even when interactions are removed). We analytically show, for a wide class of dynamical systems, how such a "plastic" and self-adaptive training of parameters can be achieved. We calculate the resulting consensual states and address the relevant stability issues. The connectivity of the network (i.e. the Fiedler number) affects the convergence rate but not the asymptotic consensual values. We then extend this idea to enable adaptation of the parameters characterizing the coupling functions themselves, so that self-learning mechanisms operate simultaneously at the agents' level and at the level of their connections. Finally, we analytically explore a set of dynamical systems involving the simultaneous action of two time-dependent networks (i.e. networks whose edges evolve with time): the first describes the interactions between the state variables, and the second affects the adaptive mechanisms themselves. In this last case, we show that for ad hoc time-dependent networks, parametric resonance phenomena occur in the dynamics. While our work places strong emphasis on explicit derivations and analytic results, we also report a set of numerical investigations that show how our explicit construction can be implemented for various classes of dynamical systems.
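To make the adaptive idea concrete, the following is a minimal numerical sketch, not the ortho-gradient construction of the work itself: a small all-to-all network of Kuramoto-type phase oscillators in which each agent's natural frequency (its local parameter) is driven by the same coupling signal that acts on its state. The topology, the adaptation rule and all parameter values (the adjacency matrix A, the coupling strength K, the adaptation rate eps) are illustrative assumptions; the sketch only shows that the frequencies reach a consensual value that persists after the interactions are removed.

# Hypothetical sketch (not the authors' exact construction): an adaptive
# Kuramoto-type network in which each oscillator's natural frequency
# omega_k is itself a dynamical variable driven by the coupling signal.
# The frequencies converge to a consensual value that is "learned", i.e.
# it persists after the interactions are switched off.
import numpy as np

rng = np.random.default_rng(0)

N = 6                     # number of agents (vertices of the network)
K = 1.0                   # coupling strength acting on the state variables
eps = 0.5                 # adaptation rate of the local parameters
dt = 0.01
steps_coupled = 4000      # integration steps with interactions switched on
steps_free = 2000         # integration steps after interactions are removed

A = np.ones((N, N)) - np.eye(N)        # all-to-all adjacency (any connected graph works)

theta = rng.uniform(0.0, 2 * np.pi, N)  # state variables (phases)
omega = rng.uniform(0.5, 1.5, N)        # initially different local parameters

def step(theta, omega, coupled):
    # pairwise coupling signal sin(theta_j - theta_k); zero once interactions are removed
    diff = np.sin(theta[None, :] - theta[:, None])
    force = (A * diff).sum(axis=1) if coupled else np.zeros(N)
    dtheta = omega + (K / N) * force
    domega = (eps / N) * force if coupled else np.zeros(N)   # self-adaptive mechanism
    return theta + dt * dtheta, omega + dt * domega

for _ in range(steps_coupled):
    theta, omega = step(theta, omega, coupled=True)
print("consensual frequencies after coupling  :", np.round(omega, 4))

for _ in range(steps_free):
    theta, omega = step(theta, omega, coupled=False)
print("frequencies after interactions removed :", np.round(omega, 4))

Because the coupling signal is antisymmetric in the pair indices, the mean of the local parameters is conserved during adaptation, so in this toy example the consensual value is simply the initial average frequency; the second printout shows that the learned values do not drift back once the network is disconnected.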
