Abstract

In machine learning, data science, and signal processing, graph or network data is becoming increasingly prevalent. It represents a large portion of the data in computer and transportation systems, energy networks, and social, biological, and other scientific applications. Often, such data is physically distributed over different network nodes, and bringing it to a central unit for processing and analysis incurs a communication cost. Decentralized algorithms offer a way to handle network data while reducing communication costs: nodes exchange messages over communication channels in order to jointly implement data processing or learning tasks. In practice, however, messages are quantized and represented by a finite number of bits in digital communication channels. As a result, imperfections in the received signals may accumulate and eventually degrade the algorithm's overall performance. This thesis focuses on designing new methods to efficiently allocate bits across the different steps of message exchanges between network nodes when implementing distributed graph signal tasks. First, we consider graph filters, which can decompose and shape graph signal frequency components in order to realize a desired response. Distributed graph filters can be used in applications such as smoothing, denoising, and semi-supervised learning. We propose an optimal bit allocation technique that adapts to the network topology and the importance of each message, so as to minimize the quantization error. Second, we consider distributed graph neural networks (GNNs), which can be used in applications such as anomaly detection, decentralized control, and traffic prediction. We study the effect of quantization in the GNN inference stage and propose an analytical solution to an optimized bit allocation problem by solving the corresponding Karush-Kuhn-Tucker (KKT) system of equations.
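As a minimal illustration of the setting, not the exact algorithm developed in the thesis, a K-hop FIR graph filter y = Σ_k h_k S^k x can be run distributively: at each hop, nodes transmit their current value to neighbors, so each exchanged signal must be quantized before being shifted by the graph shift operator S. The sketch below assumes a uniform quantizer and a fixed per-hop bit budget (`bits_per_hop` is a hypothetical parameter used here for illustration):

```python
import numpy as np

def uniform_quantize(x, bits, x_max):
    # Uniform quantizer over [-x_max, x_max] with 2**bits levels.
    levels = 2 ** bits
    step = 2 * x_max / levels
    return np.clip(np.round(x / step) * step, -x_max, x_max)

def distributed_graph_filter(S, x, h, bits_per_hop):
    # Computes y = sum_k h[k] * S^k x, quantizing the messages
    # exchanged at every hop (hop k uses bits_per_hop[k-1] bits).
    z = x.copy()
    y = h[0] * z  # local term, no communication needed
    for k in range(1, len(h)):
        # Nodes broadcast quantized values, then apply the graph shift.
        z = S @ uniform_quantize(z, bits_per_hop[k - 1], np.abs(z).max() + 1e-12)
        y += h[k] * z
    return y
```

With a generous bit budget the output is close to the exact filter response; lowering the per-hop bits lets one study how quantization error accumulates over hops, which is the effect the proposed bit allocation targets.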
Our method is shown to reduce the quantization error compared to other baselines on the tasks of distributed denoising and distributed source localization. The optimized bit allocation assigns higher relevance to messages in the middle layers of the neural network model. Finally, we consider the distributed graph learning problem, whose objective is to infer an unknown data graph from network observations in order to enable further processing tasks or interpretability. We propose a novel distributed graph learning algorithm under the assumption that the data is smooth on the data graph. Using local projection constraints, we solve the distributed optimization problem and infer a valid graph. For the same accuracy, our distributed algorithm has a lower communication cost than a centralized version, especially for sparse networks. Additionally, we propose a bit allocation scheme for the distributed graph learning algorithm and show that it achieves a better trade-off between accuracy and bit cost than a uniform bit allocation baseline. Overall, this thesis proposes novel bit allocation techniques for signal quantization in distributed implementations of signal processing and machine learning tasks. We believe that our research efforts will hasten the development of intelligent distributed processing algorithms for network data that balance performance, communication bandwidth, and computational complexity, across a wide range of potential applications in social, sensor, energy, transportation, and other fields.
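To give a concrete flavor of KKT-based bit allocation (a classic textbook formulation, not necessarily the thesis's exact objective): if the quantization distortion of message i scales as w_i · 2^(-2 b_i) for some importance weight w_i, then minimizing the total distortion under a total bit budget B has a closed-form stationary point obtained from the KKT conditions, where messages with larger weights receive more bits:

```python
import numpy as np

def kkt_bit_allocation(weights, total_bits):
    # Minimize  sum_i w_i * 2**(-2*b_i)   s.t.   sum_i b_i = B   (b_i real).
    # Stationarity of the Lagrangian gives, for every i:
    #   -2*ln(2) * w_i * 2**(-2*b_i) + mu = 0
    # Eliminating mu via the budget constraint yields:
    #   b_i = B/n + 0.5*log2(w_i) - 0.5*mean_j(log2(w_j))
    w = np.asarray(weights, dtype=float)
    n = len(w)
    log_w = np.log2(w)
    return total_bits / n + 0.5 * (log_w - log_w.mean())
```

For example, weights (1, 4, 16) with a 12-bit budget yield the allocation (3, 4, 5): each quadrupling of the weight earns one extra bit. A practical scheme would additionally round the result to integers and clip negative allocations, which the sketch omits.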
