Global Group Fairness in Federated Learning via Function Tracking
We investigate group fairness regularizers in federated learning, aiming to
train a globally fair model in a distributed setting. Ensuring global fairness
in distributed training poses a unique challenge: fairness regularizers
typically involve probability metrics between distributions pooled across all
clients and therefore do not decompose naturally into per-client terms. To
address this, we introduce a function-tracking scheme for the global fairness
regularizer based on the Maximum Mean Discrepancy (MMD), which incurs only a
small communication overhead.
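The abstract does not spell out the regularizer, but a minimal sketch of the
standard empirical MMD^2 penalty between per-group model scores (a Gaussian
kernel and 1-D scores are assumed here, not taken from the paper) makes the
cross-client coupling concrete:

```python
# Minimal sketch (not the paper's implementation): biased empirical MMD^2
# between model scores of two sensitive groups a and b, with an RBF kernel.
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Gaussian kernel matrix k(x_i, y_j) = exp(-(x_i - y_j)^2 / (2 bw^2))."""
    diff = x[:, None] - y[None, :]
    return np.exp(-diff ** 2 / (2.0 * bandwidth ** 2))

def mmd2(scores_a, scores_b, bandwidth=1.0):
    """Biased empirical MMD^2 between 1-D score samples of groups a and b."""
    k_aa = rbf_kernel(scores_a, scores_a, bandwidth).mean()
    k_bb = rbf_kernel(scores_b, scores_b, bandwidth).mean()
    # The cross term pairs every group-a sample with every group-b sample,
    # even when those samples sit on different clients: the penalty is global
    # and does not split into a sum of per-client terms.
    k_ab = rbf_kernel(scores_a, scores_b, bandwidth).mean()
    return k_aa + k_bb - 2.0 * k_ab

rng = np.random.default_rng(0)
scores_a = rng.normal(0.6, 0.1, 200)  # e.g. scores for sensitive group a
scores_b = rng.normal(0.4, 0.1, 150)  # e.g. scores for sensitive group b
print(f"MMD^2 fairness penalty: {mmd2(scores_a, scores_b):.4f}")
```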
This scheme integrates seamlessly into most federated learning algorithms
while preserving rigorous convergence guarantees, as we demonstrate for
FedAvg.
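The paper's actual tracking scheme is not reproduced in the abstract; as one
hypothetical illustration of how a global MMD term can be tracked with small
communication, the sketch below approximates the Gaussian kernel with random
Fourier features so each client uploads only O(D) per-group feature means per
round. All names here (e.g. `client_message`, `server_track`) are invented
for this sketch.

```python
# Hypothetical sketch: tracking a global MMD^2 via random Fourier features
# (RFF). Clients upload per-group feature means; the server aggregates and
# broadcasts them, so every client can evaluate the global penalty locally.
import numpy as np

D = 128                                  # random features; upload is O(D)
BANDWIDTH = 1.0
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0 / BANDWIDTH, D)  # RFF frequencies for 1-D scores
B = rng.uniform(0.0, 2.0 * np.pi, D)     # RFF phases

def rff(scores):
    """Features whose inner products approximate the Gaussian kernel."""
    return np.sqrt(2.0 / D) * np.cos(np.outer(scores, W) + B)

def client_message(scores_a, scores_b):
    """Client upload: per-group feature means and sample counts."""
    return (rff(scores_a).mean(axis=0), len(scores_a),
            rff(scores_b).mean(axis=0), len(scores_b))

def server_track(messages):
    """Server: sample-weighted global mean embeddings per group."""
    mu_a = sum(n * m for m, n, _, _ in messages) / sum(n for _, n, _, _ in messages)
    mu_b = sum(n * m for _, _, m, n in messages) / sum(n for _, _, _, n in messages)
    return mu_a, mu_b

def global_mmd2(mu_a, mu_b):
    """Global MMD^2 estimate: squared distance between mean embeddings."""
    return float(np.sum((mu_a - mu_b) ** 2))

# Toy round with three clients holding heterogeneous group data.
msgs = [client_message(rng.normal(0.6, 0.1, 50), rng.normal(0.4, 0.1, 30))
        for _ in range(3)]
print(f"tracked global MMD^2: {global_mmd2(*server_track(msgs)):.4f}")
```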
Additionally, when differential privacy is enforced, the kernel-based MMD
regularization admits a straightforward analysis through a change of kernel,
leveraging an intuitive interpretation of kernel convolution. Numerical
experiments confirm our theoretical insights.
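As a worked instance of the kernel-convolution interpretation (our own
computation, assuming a Gaussian kernel with bandwidth \(\gamma\) and
independent Gaussian DP noise \(\xi, \xi' \sim \mathcal{N}(0, \sigma^2)\)
added to the two kernel arguments):

```latex
% Smoothing a Gaussian kernel by independent Gaussian noise on each argument
% yields, in expectation, a Gaussian kernel with enlarged bandwidth:
\[
  \mathbb{E}_{\xi,\xi'}\!\left[ k(x+\xi,\, y+\xi') \right]
  = \mathbb{E}_{\xi,\xi'}\!\left[ e^{-\frac{(x - y + \xi - \xi')^2}{2\gamma^2}} \right]
  = \sqrt{\frac{\gamma^2}{\gamma^2 + 2\sigma^2}}\;
    e^{-\frac{(x - y)^2}{2(\gamma^2 + 2\sigma^2)}} .
\]
```

Up to the explicit constant, the expected noisy kernel is again a Gaussian
kernel with bandwidth \(\sqrt{\gamma^2 + 2\sigma^2}\); for an unbiased MMD
estimator, where each kernel evaluation involves two independent noise draws,
the effect of the privacy noise is therefore exactly a change of kernel.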