Title: Budget-Bounded Incentives for Federated Learning
Authors: Richardson, Adam Julian; Filos-Ratsikas, Aris; Faltings, Boi
Dates: 2023-02-09; 2022-12-07
DOI: 10.1007/978-3-030-63076-8_13
URL: https://infoscience.epfl.ch/handle/20.500.14299/194725
Type: text::book/monograph::book part or chapter
Keywords: Federated learning; Data valuation; Incentives

Abstract: We consider federated learning settings with independent, self-interested participants. Because all contributions are made privately, participants may be tempted to free-ride, providing redundant or low-quality data while still enjoying the benefits of the FL model. In federated learning this is especially harmful, as low-quality data can degrade the quality of the FL model. Free-riding can be countered by offering participants incentives to provide truthful data. While game-theoretic schemes exist for rewarding truthful data, they do not account for the redundancy of data with previous contributions. This creates arbitrage opportunities in which participants gain rewards for redundant data, and the federation may be forced to pay out more in incentives than the value of the FL model justifies. We show how a scheme based on influence can guarantee both that the incentive budget is bounded in proportion to the value of the FL model and that truthfully reporting data is the dominant strategy for participants. We show that, under reasonable conditions, this result holds even when the testing data is provided by participants.