Infoscience
EPFL, École polytechnique fédérale de Lausanne
doctoral thesis

Multi-agent Learning with Privacy Guarantees

Rizk, Elsa  
2023

A multi-agent system consists of a collection of decision-making or learning agents subject to streaming observations from some real-world phenomenon. The goal of the system is to solve a global learning or optimization problem in a distributed or decentralized manner, where agents collect data locally and interact either with their neighbors or with some central processor. In designing multi-agent systems, one normally formulates a global risk function, consisting of the aggregate of local risks, and then seeks to approximate its optimizer through localized interactions among neighboring agents. During this process, the agents are required to share processed data and/or iterates with their neighbors. The issue of privacy then becomes critical in enabling the safe communication of information over the edges linking the elements of the multi-agent network. Several works in the literature enforce privacy by adding random noise on top of the shared information. Most of these works establish that the resulting architectures are differentially private, but they assume that the gradients of the risk functions are bounded. Unfortunately, this condition rarely holds in practice, a fact that is often overlooked. For example, even quadratic risk functions have unbounded gradients, because these gradients are affine functions of the unknown parameter. Moreover, most studies fail to recognize that their differentially private solutions, and the added noise sources, end up degrading the mean-square-error (MSE) performance of the learning algorithms from $O(\mu)$ to $O(\mu^{-1})$, where $\mu$ is the small step-size (learning) parameter. These are serious limitations that remain unaddressed in existing approaches to differential privacy in multi-agent systems. In this dissertation, we resolve both issues. First, we do not assume bounded gradients for the risk functions.
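To make the noise-perturbation idea concrete, here is a minimal sketch of a single agent's noisy update: the agent takes a local gradient step and then perturbs the iterate with Gaussian noise before sharing it with neighbors. All names (`dp_local_update`, `grad_fn`) and the use of the Gaussian mechanism are illustrative assumptions, not the construction developed in the thesis; the example also uses a quadratic risk, whose gradient is affine in the parameter and hence unbounded, as the abstract notes.

```python
import numpy as np

def dp_local_update(w, grad_fn, mu, sigma, rng):
    """One noisy local step (illustrative sketch, not the thesis's scheme):
    descend the local risk gradient with step size mu, then perturb the
    iterate with Gaussian noise of std sigma before sharing it."""
    w_new = w - mu * grad_fn(w)                                # local gradient step
    shared = w_new + rng.normal(0.0, sigma, size=w_new.shape)  # privacy noise on the message
    return w_new, shared

# Quadratic risk (w - 1)^T (w - 1): its gradient 2*(w - 1) is unbounded.
rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(100):
    w, shared = dp_local_update(w, lambda v: 2.0 * (v - 1.0),
                                mu=0.1, sigma=0.01, rng=rng)
```

Note that only the *shared* message is perturbed here; the local iterate still converges to the minimizer, while an eavesdropper on the edge sees a noisy version.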
And yet, we are still able to establish that the multi-agent systems remain differentially private, albeit with high probability. We reach this conclusion by showing that the noise sources should not be added in an ad hoc manner, as is common in existing approaches, but should instead be constructed in a manner that is cognizant of the graph topology. Otherwise, the noise generates a magnifying effect that degrades performance. For this reason, we introduce a locally homomorphic noise construction and show that, under this construction, the MSE performance of the multi-agent system remains at $O(\mu)$ while the system is simultaneously differentially private. This is a reassuring conclusion. We illustrate these results for the special case of federated learning architectures, and also extend them to more general distributed learning and optimization strategies over decentralized architectures.
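The benefit of topology-aware noise can be illustrated with a toy federated setting: if the agents' noises are correlated so that they sum to zero, each transmitted message is still perturbed, yet the noise cancels exactly when the server averages. This zero-sum construction is only a stand-in chosen for illustration, assuming a single central aggregator; it is not the locally homomorphic construction of the thesis, which handles general decentralized topologies.

```python
import numpy as np

def zero_sum_noise(num_agents, dim, sigma, rng):
    """Correlated noise whose sum across agents is exactly zero.
    Each agent's message is individually perturbed, but the noises
    cancel under server-side averaging. Toy stand-in for a
    topology-cognizant noise construction."""
    n = rng.normal(0.0, sigma, size=(num_agents, dim))
    return n - n.mean(axis=0)     # project onto the zero-sum subspace

rng = np.random.default_rng(1)
updates = rng.normal(size=(5, 4))            # local iterates from 5 agents
noise = zero_sum_noise(5, 4, sigma=1.0, rng=rng)
perturbed = updates + noise                  # what each agent transmits
server_avg = perturbed.mean(axis=0)          # equals the noise-free average
```

Because the aggregate noise vanishes, the averaged iterate carries no extra perturbation, which is the intuition behind keeping the MSE at $O(\mu)$ instead of letting independent noises accumulate.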

Type
doctoral thesis
DOI
10.5075/epfl-thesis-9939
Author(s)
Rizk, Elsa  
Advisors
Sayed, Ali H.  
Jury

Prof. Martin Jaggi (chair); Prof. Ali H. Sayed (thesis director); Prof. Volkan Cevher, Prof. Vincenzo Matta, Prof. Abdelhak Zoubir (examiners)

Date Issued
2023
Publisher
EPFL
Publisher place
Lausanne
Public defense date
2023-06-22
Thesis number
9939
Number of pages
148

Subjects
multi-agent system • distributed learning • decentralized learning • federated learning • diffusion learning • differential privacy

EPFL units
ASL  
Faculty
STI  
School
IEM  
Doctoral School
EDIC  
Available on Infoscience
August 10, 2023
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/199671
Contact: infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.