Mixed Nash for Robust Federated Learning
We study robust federated learning (FL) within a game-theoretic framework to mitigate the server's vulnerability even to an informed adversary who can tailor training-time attacks (Fang et al., 2020; Xie et al., 2020a; Ozfatura et al., 2022; Rodríguez-Barroso et al., 2023). Specifically, we introduce RobustTailor, a simulation-based framework that prevents the adversary from being omniscient, and we derive its convergence guarantees. RobustTailor significantly improves robustness to training-time attacks at the cost of a minor trade-off in privacy. Empirical results under challenging attacks show that RobustTailor performs close to an upper bound obtained with perfect knowledge of the honest clients.
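The abstract only names the ingredients (a simulation-based game between the server and the adversary, solved for a mixed Nash equilibrium), so the sketch below is an illustrative reading rather than the paper's implementation: a zero-sum game between a pool of robust aggregation rules and a pool of simulated attacks is solved for the server's mixed strategy via a linear program, and an aggregator is then sampled from that strategy each round. The aggregator and attack names, the payoff entries, and the helper mixed_nash_row_strategy are all assumptions for illustration.

```python
# Illustrative sketch only: the pools, payoff values, and LP formulation below
# are assumptions; RobustTailor's actual payoffs would come from simulation.
import numpy as np
from scipy.optimize import linprog

# Hypothetical payoff matrix U[i, j]: server utility (e.g., negative loss after
# a simulated round) when the server uses robust aggregator i against attack j.
aggregators = ["coordinate_median", "trimmed_mean", "krum"]   # assumed pool
attacks = ["sign_flip", "label_flip", "inner_product"]        # assumed pool
U = np.array([
    [0.8, 0.5, 0.6],
    [0.7, 0.9, 0.4],
    [0.6, 0.6, 0.7],
])

def mixed_nash_row_strategy(payoff: np.ndarray) -> np.ndarray:
    """Solve max_x min_j (U^T x)_j for a zero-sum game via an LP.

    Variables (x_1..x_m, v): maximize v subject to U^T x >= v, sum(x) = 1, x >= 0.
    """
    m, n = payoff.shape
    c = np.concatenate([np.zeros(m), [-1.0]])          # linprog minimizes, so minimize -v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])     # v - (U^T x)_j <= 0 for every attack j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]  # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0.0, None)] * m + [(None, None)]          # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return np.clip(res.x[:m], 0.0, None)

x = mixed_nash_row_strategy(U)
rng = np.random.default_rng(0)
# Each round, the server samples an aggregation rule from its mixed strategy.
chosen = rng.choice(len(aggregators), p=x / x.sum())
print(dict(zip(aggregators, np.round(x, 3))), "-> round uses", aggregators[chosen])
```

Sampling from a mixed strategy, rather than committing to a single fixed rule, is what keeps an informed adversary from tailoring its attack to the aggregator actually in use, which is the robustness effect the abstract describes.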
File: 1780_Mixed_Nash_for_Robust_Fed.pdf (Main Document, Adobe PDF, 7.83 MB, open access, CC BY; MD5: a82fce68f79bf30e9f21180d7c5fee56)