000269025 001__ 269025
000269025 005__ 20190828103948.0
000269025 037__ $$aCONF
000269025 245__ $$aGenerating Artificial Data for Private Deep Learning
000269025 260__ $$c2019
000269025 269__ $$a2019
000269025 336__ $$aConference Papers
000269025 520__ $$aIn this paper, we propose generating artificial data that retain the statistical properties of real data as a means of providing privacy for the original dataset. We use generative adversarial networks to draw privacy-preserving artificial data samples and derive an empirical method to assess the risk of information disclosure in a differential-privacy-like way. Our experiments show that we are able to generate labelled data of high quality and use them to successfully train and validate supervised models. Finally, we demonstrate that our approach significantly reduces the vulnerability of such models to model inversion attacks.
000269025 6531_ $$aml-ai
000269025 700__ $$aTriastcyn, Aleksei
000269025 700__ $$aFaltings, Boi
000269025 773__ $$tProceedings of the PAL: Privacy-Enhancing Artificial Intelligence and Language Technologies, AAAI Spring Symposium Series
000269025 8564_ $$uhttps://infoscience.epfl.ch/record/269025/files/1st_PAL_paper_7.pdf$$zNA$$s1348302
000269025 8560_ $$fsylvie.thomet@epfl.ch
000269025 909C0 $$xU10406$$0252184$$pLIA
000269025 909CO $$pconf$$pIC$$ooai:infoscience.epfl.ch:269025
000269025 91899 $$9LIA2019
000269025 973__ $$aEPFL$$rREVIEWED
000269025 980__ $$aCONF