Abstract

Networks are everywhere, and we encounter many of them in daily life. Networks such as the Internet, the World Wide Web, and social, biological, and economic networks have been the subject of extensive study in the last decade. The volume of publications and investment is increasing sharply, and applications are spreading across various disciplines. One of the most important problems in this context, and the subject of this thesis, is how information is processed in a dynamical network and how a dynamical network can be exploited to solve information processing tasks. To study the problem systematically, information processing dynamical networks (IPDNs) are divided into two subcategories: (i) autonomous IPDNs, in which the input information is static and is given as the initial state of the network, and (ii) non-autonomous IPDNs, in which the input information is injected by an input signal. In this thesis, synchronizability of dynamical networks, community detection on dynamical networks, and skill acquisition in reinforcement learning (RL) using dynamical networks are considered as problems in the autonomous IPDN category. Furthermore, adaptation of the internal weights of a special type of recurrent neural network, the so-called reservoir-based RNN (RRNN), and multitasking in RRNNs are investigated in the non-autonomous IPDN category.

First, we discuss the interplay between the structure of a complex network and its dynamical behavior, in particular the synchronizability of the network. There are several interpretations of synchronizability in the literature, which fall into four main groups that do not coincide in general. One of these interpretations uses the second smallest eigenvalue of the graph Laplacian, the so-called algebraic connectivity, an important measure for various applications in dynamical networks. We introduce a new lower bound for the algebraic connectivity and prove that it is always greater than or equal to the previously published lower bounds. In addition, the proposed bound helps to identify the parameters that most strongly affect the synchronizability of the network from the algebraic connectivity point of view.
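
As a concrete illustration of the quantity discussed above, the following minimal Python sketch computes the algebraic connectivity of a small graph. The example graph is chosen purely for illustration, and the thesis's new lower bound itself is not reproduced here.

import numpy as np

def algebraic_connectivity(A):
    """Second smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    # For a connected graph the smallest eigenvalue is 0; lambda_2 is next.
    return np.sort(np.linalg.eigvalsh(L))[1]

# Example: the 4-node path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

print(algebraic_connectivity(A))  # 2 - sqrt(2), about 0.586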

Having found the most influential parameters, we propose two algorithms for enhancing the synchronizability of a given network: one by efficient rewiring of the links and one by assigning proper weights to the links. The rewiring algorithm converts an arbitrary graph, irrespective of its topology, into a class of networks we call super democratic networks, which are characterized by homogeneous degree and load distributions and a very low average path length. The weighting algorithm, in turn, uses a novel graph-theoretic construct, the so-called neighboring graph, to determine the proper weights of the links. The optimality of the weighting algorithm is proved, and numerical simulations on benchmark networks show that it outperforms previously published approaches.

In addition, by associating an iterated map with each node of the network, we propose a general algorithm for the community detection problem on complex networks. We prove that there is a direct relation between the global optima of a general energy function and the asymptotically stable fixed points of the system of coupled iterated maps. Interestingly, we show that the optimization of this energy function reduces to well-known community detection algorithms when the parameters of the energy function are set through appropriate heuristics. The proposed algorithm is very fast, i.e., linear in the size of the network for sparse networks, and very accurate; furthermore, it is capable of discovering overlapping communities (a minimal illustrative sketch is given below).

Next, we develop a new algorithm for automatic skill acquisition by an intelligent agent that learns a task using the reinforcement learning approach in a partially observable environment. By mapping the agent's transition history to a graph, one can use the complex network framework to solve the problem. Introducing a new betweenness centrality measure, the so-called node connection graph stability centrality, the bottlenecks of the corresponding graph can, under some conditions, be interpreted as sub-goals of the agent in the environment, which facilitates and accelerates its exploration. By eliminating redundant sub-goals and injecting useful prior information about the environment, an agent that runs the proposed algorithm learns the optimal strategy significantly faster (see the bottleneck sketch below).

For the case of non-autonomous IPDNs, we focus on reservoir-based recurrent neural networks. In this class of RNNs, a random, large, and sparse network serves as the hidden layer, called the reservoir, and only the weights of the output layer are trained. Although the weights of the reservoir in RRNNs are usually kept fixed, it turns out that adapting the hidden layer can enhance performance significantly. Accordingly, we show how the structural properties of the reservoir can be exploited to improve task performance on RRNNs. One can create a large reservoir weight matrix using iterated Kronecker products of small matrices. In this way, the number of parameters of the reservoir decreases significantly, which allows optimization by adjusting the elements of the small matrices. Although the space of Kronecker-based weight matrices is much smaller than the space of unconstrained random matrices, numerical simulations on benchmark tasks show that it still contains "good enough" reservoirs of much lower complexity (the construction is sketched below).

Finally, we investigate the problem of learning multiple tasks that share the same input in reservoir computing methods. To this end, we propose a special reservoir architecture that consists of two layers: the general workspace (GW) layer and the task-specific (TS) layer. The shared inputs are fed into the GW layer, and the reservoirs of the TS layer are driven by the outputs of the GW. Any task-specific configuration, e.g., transfer function, training parameters, and additional inputs, is implemented in the corresponding reservoirs of the TS layer. The presence of two separate layers makes it possible to apply different and independent adaptation methods: unsupervised training is applied to the GW layer to maximize the relevant information passed to the TS layer, while the reservoirs of the TS layer are trained in a supervised fashion to minimize the output error of the corresponding task. On a two-task problem, we show that the layered model outperforms two alternative approaches, one with a single reservoir and one with two separate reservoirs, one for each task (a minimal sketch of the architecture is given below).
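
The abstract does not spell out the energy function or the iterated map, so the following Python sketch is only a stand-in: it uses a plain label-propagation update, one of the well-known algorithms that such an energy optimization can reduce to, to illustrate the coupled-map picture in which each node repeatedly updates its state from its neighbors' states and fixed points correspond to community assignments.

import numpy as np

def label_propagation(A, max_iter=100, seed=0):
    """Illustrative coupled-map stand-in, not the thesis's algorithm."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    labels = np.arange(n)                 # initial state: one community per node
    for _ in range(max_iter):
        changed = False
        for i in rng.permutation(n):      # asynchronous node updates
            neighbors = np.nonzero(A[i])[0]
            if neighbors.size == 0:
                continue
            best = np.bincount(labels[neighbors]).argmax()
            if labels[i] != best:
                labels[i] = best
                changed = True
        if not changed:                   # fixed point of the coupled maps
            break
    return labels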
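
The bottleneck idea for sub-goal discovery can be sketched as follows, on a made-up transition history: NetworkX's standard betweenness centrality is used as a stand-in, since the thesis's node connection graph stability centrality is not defined in the abstract.

import networkx as nx

# Fictitious transition history: two "rooms" joined by a doorway state 4.
transitions = [(0, 1), (1, 2), (2, 3), (3, 4),   # room A
               (4, 5),                            # doorway: the bottleneck
               (5, 6), (6, 7), (7, 8), (8, 5)]   # room B

G = nx.Graph(transitions)
centrality = nx.betweenness_centrality(G)
subgoal = max(centrality, key=centrality.get)
print(f"candidate sub-goal: state {subgoal}")    # the doorway state 4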
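
The Kronecker construction itself is straightforward to sketch. In the example below, an 81 x 81 reservoir matrix is generated from a single 3 x 3 base matrix (9 free parameters instead of 6561); the base-matrix values, the number of Kronecker factors, and the spectral-radius rescaling are illustrative assumptions, not the thesis's actual settings.

import numpy as np

rng = np.random.default_rng(0)
B = rng.uniform(-1.0, 1.0, size=(3, 3))       # small base matrix: 9 parameters

W = B
for _ in range(3):                            # W = B (x) B (x) B (x) B, 81 x 81
    W = np.kron(W, B)

# Common echo-state-network practice: rescale to spectral radius < 1.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Standard reservoir state update driven by a toy scalar input signal.
W_in = rng.uniform(-0.5, 0.5, size=(W.shape[0],))
x = np.zeros(W.shape[0])
for u in np.sin(np.linspace(0, 8 * np.pi, 200)):
    x = np.tanh(W @ x + W_in * u)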
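
Last, a minimal sketch of the two-layer architecture: a general workspace reservoir driven by the shared input and one task-specific reservoir driven by the GW states, with a ridge-regression readout for a toy task. The unsupervised adaptation of the GW layer is omitted here, and all sizes, scalings, and the toy task are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_res, scale=0.9):
    W = rng.uniform(-1, 1, size=(n_res, n_res))
    W *= scale / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
    return W, W_in

def run(W, W_in, inputs):
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states[t] = x
    return states

u = np.sin(np.linspace(0, 8 * np.pi, 500))     # shared input signal
W_gw, Win_gw = make_reservoir(1, 50)           # GW layer
gw_states = run(W_gw, Win_gw, u)

W_ts, Win_ts = make_reservoir(50, 30)          # one TS reservoir (task 1)
ts_states = run(W_ts, Win_ts, gw_states)       # driven by GW outputs

y = np.roll(u, -5)                             # toy task: predict u five steps ahead
reg = 1e-6                                     # ridge-regression readout weights
W_out = np.linalg.solve(ts_states.T @ ts_states + reg * np.eye(30),
                        ts_states.T @ y)

A second TS reservoir with its own transfer function or training parameters would be driven by the same gw_states, which is the point of the shared GW layer.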
