Generating Artificial Data for Private Deep Learning
In this paper, we propose generating artificial data that retain the statistical properties of real data as a means of protecting the privacy of the original dataset. We use generative adversarial networks to draw privacy-preserving artificial data samples and derive an empirical method for assessing the risk of information disclosure in a differential-privacy-like way. Our experiments show that we can generate labelled data of high quality and use it to train and validate supervised models successfully. Finally, we demonstrate that our approach significantly reduces the vulnerability of such models to model inversion attacks.
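The pipeline the abstract describes can be sketched end to end: fit a generative model on the real labelled data, sample artificial labelled data from it, train a supervised model on the artificial samples only, and validate it against the real data. This is an illustrative sketch, not the paper's method: the paper trains a GAN, whereas here a per-class Gaussian sampler stands in as the generative model (to keep the example dependency-free), and the toy dataset, the nearest-class-mean classifier, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" labelled dataset: two 2-D Gaussian clusters (hypothetical stand-in).
n = 500
X_real = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y_real = np.concatenate([np.zeros(n, int), np.ones(n, int)])

# Stand-in generative model: fit a per-class mean and covariance, then sample.
# (The paper uses a GAN here; this keeps the sketch runnable without a DL framework.)
def fit_generator(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def sample_artificial(params, n_per_class, rng):
    Xs, ys = [], []
    for c, (mu, cov) in params.items():
        Xs.append(rng.multivariate_normal(mu, cov, n_per_class))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

params = fit_generator(X_real, y_real)
X_art, y_art = sample_artificial(params, 500, rng)

# Train a simple classifier (nearest class mean) on the ARTIFICIAL data only.
means = {c: X_art[y_art == c].mean(axis=0) for c in np.unique(y_art)}

def predict(X):
    cs = sorted(means)
    d = np.stack([np.linalg.norm(X - means[c], axis=1) for c in cs], axis=1)
    return np.array(cs)[d.argmin(axis=1)]

# Validate on the real data: only the generative model's outputs are released,
# so the downstream model never touches individual real records directly.
acc = (predict(X_real) == y_real).mean()
print(f"accuracy on real data: {acc:.2f}")
```

The point of the design is the data-flow boundary: the supervised model sees only artificial samples, so attacks against it (such as model inversion) target the generative model's output distribution rather than individual training records.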