Infoscience
conference paper

ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees

Sajadmanesh, Sina • Gatica-Perez, Daniel
January 1, 2024
Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024)
17th ACM International Conference on Web Search and Data Mining (WSDM)

Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns as graph data can contain personal or sensitive information. Differentially private GNN models have been recently proposed to preserve privacy while still allowing for effective learning over graph-structured datasets. However, achieving an ideal balance between accuracy and privacy in GNNs remains challenging due to the intrinsic structural connectivity of graphs. In this paper, we propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve such accuracy-privacy trade-offs. Combined with the aggregation perturbation technique to ensure differential privacy, ProGAP splits a GNN into a sequence of overlapping submodels that are trained progressively, expanding from the first submodel to the complete model. Specifically, each submodel is trained over the privately aggregated node embeddings learned and cached by the previous submodels, leading to an increased expressive power compared to previous approaches while limiting the incurred privacy costs. We formally prove that ProGAP ensures edge-level and node-level privacy guarantees for both training and inference stages, and evaluate its performance on benchmark graph datasets. Experimental results demonstrate that ProGAP can achieve up to 5-10% higher accuracy than existing state-of-the-art differentially private GNNs. Our code is available at https://github.com/sisaman/ProGAP.
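A minimal NumPy sketch of the aggregation-perturbation idea described in the abstract: each node embedding is clipped to a bounded L2 norm, neighbor embeddings are sum-aggregated, and Gaussian noise scaled to the clipping bound is added; each noisy aggregation is then cached so the next submodel can train on it. Function names, `sigma`, and `clip` are illustrative assumptions, not the paper's API, and the exact privacy calibration and training loop are omitted.

```python
import numpy as np

def perturbed_aggregate(x, adj, sigma, clip=1.0, rng=None):
    """Aggregation perturbation (illustrative sketch): clip each node
    embedding to L2 norm <= clip, sum-aggregate over neighbors via the
    adjacency matrix, then add Gaussian noise scaled by the clip bound."""
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    clipped = x * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    agg = adj @ clipped  # neighborhood sum-aggregation
    return agg + rng.normal(0.0, sigma * clip, size=agg.shape)

def progressive_stages(x, adj, n_stages, sigma, rng=None):
    """Cache one noisy aggregation per stage; stage k consumes the cached
    output of stage k-1, so each private aggregation is computed (and its
    privacy cost paid) only once. Submodel training itself is omitted."""
    cache = [x]
    for _ in range(n_stages):
        cache.append(perturbed_aggregate(cache[-1], adj, sigma, rng=rng))
    return cache
```

With `sigma=0` and an identity adjacency matrix, the output reduces to the clipped input embeddings, which is a convenient sanity check for the clipping step.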

Details
Type
conference paper
DOI
10.1145/3616855.3635761
Web of Science ID
WOS:001182230100068

Author(s)
Sajadmanesh, Sina • Gatica-Perez, Daniel
Corporate authors
Association for Computing Machinery
Date Issued
2024-01-01
Publisher
Association for Computing Machinery
Publisher place
New York
Journal
Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM 2024)
ISBN of the book
979-8-4007-0371-3
Start page
596
End page
605

Subjects
Technology • Graph Neural Network • Differential Privacy • Progressive Learning
Peer reviewed
REVIEWED

Written at
EPFL
EPFL units
LIDIAP
Event name
17th ACM International Conference on Web Search and Data Mining (WSDM)
Event place
Merida, Mexico
Event date
March 4-8, 2024

Funder / Grant Number
European Commission's H2020 Program, AI4Media Project (ICT-48-2020)
European Commission's H2020 WeNet Project (823783)

Available on Infoscience
May 1, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/207608
Contact: infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.