Abstract

Federated learning is explicitly designed to learn a global model from distributed, possibly sensitive, non-i.i.d. data held by different clients. In practice, however, a single global model cannot always accommodate the heterogeneous needs of the clients, e.g. in next-word prediction on mobile phones, where different clients may express themselves differently. Personalized federated learning therefore aims to learn a personal model for each client by training a global model that is easy to adapt locally. A similar formulation is known from meta-learning, where the goal is to learn a global model that can easily be adapted to different tasks rather than different clients. In this project, we examine the differences and similarities between algorithms from the two fields. We furthermore develop a benchmarking dataset that captures different aspects of heterogeneity between the clients. On this benchmark we evaluate when personalization in federated learning is actually beneficial compared to training each client individually, and compare which algorithms from meta-learning and personalized federated learning fare best.
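As an illustration of the shared formulation (a sketch of one common MAML-style objective from the literature, not necessarily the exact formulation used in this project), the server can optimize the global model $w$ so that each client performs well after one local gradient step on its own loss $f_i$:

$$\min_{w} \; \frac{1}{N} \sum_{i=1}^{N} f_i\bigl(w - \alpha \nabla f_i(w)\bigr),$$

where $N$ is the number of clients and $\alpha$ is the local adaptation step size. Replacing clients with tasks recovers the standard meta-learning objective, which is what makes the two fields directly comparable.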
