000269023 001__ 269023
000269023 005__ 20190819160851.0
000269023 037__ $$aCONF
000269023 245__ $$aFederated Generative Privacy
000269023 260__ $$c2019
000269023 269__ $$a2019
000269023 336__ $$aConference Papers
000269023 520__ $$aIn this paper, we propose FedGP, a framework for privacy-preserving data release in the federated learning setting. We use generative adversarial networks, whose generator components are trained with the FedAvg algorithm, to draw privacy-preserving artificial data samples and to empirically assess the risk of information disclosure. Our experiments show that FedGP is able to generate labelled data of high quality, sufficient to successfully train and validate supervised models. Finally, we demonstrate that our approach significantly reduces the vulnerability of such models to model inversion attacks.
000269023 700__ $$aTriastcyn, Aleksei
000269023 700__ $$aFaltings, Boi
000269023 773__ $$tProceedings of the IJCAI Workshop on Federated Machine Learning for User Privacy and Data Confidentiality (FML 2019)
000269023 8564_ $$uhttps://infoscience.epfl.ch/record/269023/files/fml_2019-1.pdf$$zNA$$s361266
000269023 909C0 $$xU10406$$0252184$$pLIA
000269023 909CO $$ooai:infoscience.epfl.ch:269023$$pIC$$pconf
000269023 91899 $$9LIA2019
000269023 973__ $$aEPFL
000269023 980__ $$aCONF