The social learning paradigm for distributed hypothesis testing involves a collection of agents interacting over a graph, each observing streaming data that provide evidence about the state of the environment. These agents perform a series of local Bayesian updates and belief fusion steps, where they combine their own beliefs with those of their immediate neighbors, thus facilitating the collective learning of the true state of nature. This process mimics the essence of human decision-making, where individuals gather their own insights and consult with trusted peers to reach more confident conclusions. Social learning models are applicable in a wide range of contexts, including sensor networks, distributed machine learning, and the modeling of opinion dynamics.
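To make these mechanics concrete, the sketch below shows one iteration of a standard log-linear social learning rule: each agent performs a local Bayesian update with its new observation and then fuses the intermediate beliefs of its neighbors through weighted geometric averaging. This is a minimal illustration of the general paradigm, assuming a common adapt-then-combine form; the function names, the row-stochastic convention for the combination matrix, and the random likelihoods are illustrative and not taken from the dissertation.

```python
import numpy as np

def bayesian_update(belief, likelihoods):
    """Adaptation step: weight the prior belief by the likelihood of the
    new observation under each hypothesis, then normalize."""
    intermediate = belief * likelihoods
    return intermediate / intermediate.sum()

def combine(intermediate_beliefs, weights):
    """Fusion step: geometric averaging of neighbors' intermediate beliefs,
    with the trust (combination) weights as exponents."""
    log_pooled = weights @ np.log(intermediate_beliefs)
    pooled = np.exp(log_pooled)
    return pooled / pooled.sum(axis=1, keepdims=True)

# Example: 3 agents, 2 hypotheses, one time step (illustrative values).
rng = np.random.default_rng(0)
beliefs = np.full((3, 2), 0.5)                      # uniform initial beliefs
A = np.array([[0.6, 0.2, 0.2],                      # row k: weights agent k
              [0.2, 0.6, 0.2],                      # assigns to its neighbors
              [0.2, 0.2, 0.6]])
likelihoods = rng.uniform(0.1, 1.0, size=(3, 2))    # L_k(x_k | theta)

psi = np.array([bayesian_update(beliefs[k], likelihoods[k]) for k in range(3)])
beliefs = combine(psi, A)
```

Iterating this update over streaming observations drives the agents' beliefs toward concentration on the true hypothesis under standard connectivity and identifiability conditions.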
This dissertation concentrates on the interpretability aspect of social learning, addressing the inverse problem of inferring underlying network information from observed belief sequences. By observing the public beliefs shared among agents, we infer key properties of the network, such as the combination policy representing peer-to-peer trust levels, the informativeness of each agent's data, and the most influential agents. These insights can significantly improve the understanding of multiagent system dynamics.
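As a rough illustration of this inverse perspective, under the log-linear model above the log-belief ratios exchanged by the agents evolve (approximately) as a linear recursion driven by the combination matrix, so a generic regression over observed belief sequences can already expose network structure. The snippet below is only a least-squares baseline under that linearization, with an intercept absorbing the average log-likelihood drift; it is not the estimator developed in the dissertation, and all names are illustrative.

```python
import numpy as np

def estimate_combination_matrix(log_belief_ratios):
    """Estimate a combination matrix from a (T, N) array of log-belief ratios
    lambda_{k,i} (one hypothesis pair, N agents, T time steps), assuming the
    log-linear recursion lambda_i ~ lambda_{i-1} @ A + drift + noise."""
    lam = np.asarray(log_belief_ratios, dtype=float)
    X = lam[:-1]                                        # lambda_{i-1}
    Y = lam[1:]                                         # lambda_i
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])    # intercept column
    W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
    A_hat = W[:-1]    # (N x N) estimate of peer-to-peer combination weights
    drift = W[-1]     # per-agent drift, tied to average log-likelihood ratios
    return A_hat, drift
```

Under these assumptions, the recovered weights indicate how strongly each agent relies on its neighbors, while the drift term reflects how informative each agent's private data stream is on average.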
One significant application of the approach developed herein is in the analysis of opinion dynamics over social media platforms. By modeling the opinion dynamics process using a social learning framework, we can interpret the shared beliefs and uncover hidden social influences. For instance, on platforms such as X, users openly exchange opinions, and these interactions can be analyzed to determine which users contribute most to collective decision-making, to identify influential users, and to detect malicious behavior in the network. The methodology developed in this thesis provides a solid mathematical foundation for understanding social influence, moving beyond the heuristic-based methods often used in social media analysis.
In addition to traditional single-task environments, where all agents aim to discover one universal truth, this work further extends social learning to multitask settings, where agents observe data arising from multiple state variables. Focusing on community-structured networks, we show how, under certain conditions, each cluster can discover its own truth. These results are important for applications involving, for example, spatially distributed sensors and social networks with diverse opinions.
The multitask setting allows us to elucidate how the presence of malicious or malfunctioning agents (i.e., agents with the "wrong" true state compared to the rest of the network) can disturb the decision-making process of other nodes and generally slow convergence to consensus. By leveraging the sequence of publicly exchanged beliefs, we present an algorithm to uncover the true state of each agent, thereby identifying the intent of each node in the graph. This capability is particularly important in distributed systems, where periodic network probing can verify that all agents are learning correctly.
Overall, this research contributes to the field of decentralized learning by enhancing the interpretability and applicability of social learning algorithms. The proposed solutions demonstrate potential in various domains, including social media analysis, sensor networks, and distributed decision-making systems.