The Interplay Between Explainability and Differential Privacy in Federated Healthcare
Federated Learning (FL) enables the training of deep learning models on siloed medical data, yet its real-world application is often challenged by statistical heterogeneity, privacy requirements, and the need for model transparency. This paper addresses these challenges by investigating the interplay between FL, Differential Privacy (DP), and model explainability for 3D medical image segmentation. To simulate a realistic environment, we establish a cross-silo federation of four clients, comprising data from the BraTS dataset and a distinct heterogeneous dataset from a real hospital in Europe. Our analysis characterizes and quantifies an interaction, namely the Heterogeneity Amplifier effect, providing a metric to measure the disproportionate degradation of explanation fidelity on heterogeneous clients under DP. To address this challenge, we propose Boundary-Interior Disentangled CAM (BID-CAM), a hybrid explanation method designed for DP-awareness. Our evaluation shows that BID-CAM maintains explanation fidelity under privacy constraints better than standard methods, demonstrating a more robust approach to model transparency in private, federated settings for medical imaging.
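The abstract does not specify how the Heterogeneity Amplifier metric is computed. As a rough illustration only, the following sketch shows one plausible way such an amplification effect could be quantified: the ratio between the DP-induced drop in explanation fidelity averaged over heterogeneous clients and the same drop averaged over homogeneous clients. All function names, variable names, and numbers below are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch (not the paper's published method): quantify a
# "Heterogeneity Amplifier" effect as the ratio between the DP-induced
# explanation-fidelity drop on heterogeneous clients and the drop on
# homogeneous clients. All names and values here are illustrative assumptions.

from statistics import mean


def fidelity_drop(fidelity_no_dp: float, fidelity_dp: float) -> float:
    """Relative loss of explanation fidelity once DP noise is applied."""
    return (fidelity_no_dp - fidelity_dp) / fidelity_no_dp


def heterogeneity_amplification(homog_pairs, heterog_pairs) -> float:
    """Ratio > 1 suggests heterogeneous clients degrade disproportionately.

    Each argument is a list of (fidelity_without_dp, fidelity_with_dp)
    tuples, one per client.
    """
    homog_drop = mean(fidelity_drop(a, b) for a, b in homog_pairs)
    heterog_drop = mean(fidelity_drop(a, b) for a, b in heterog_pairs)
    return heterog_drop / homog_drop


# Toy example: three BraTS-like clients vs. one heterogeneous hospital client.
homog = [(0.82, 0.78), (0.80, 0.75), (0.81, 0.77)]
heterog = [(0.79, 0.62)]
print(f"amplification = {heterogeneity_amplification(homog, heterog):.2f}")
```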
2-s2.0-105018302483
Organisation Européenne pour la Recherche Nucléaire
École Polytechnique Fédérale de Lausanne
Organisation Européenne pour la Recherche Nucléaire
Organisation Européenne pour la Recherche Nucléaire
Organisation Européenne pour la Recherche Nucléaire
Attikon University Hospital
Attikon University Hospital
Organisation Européenne pour la Recherche Nucléaire
Universitat Pompeu Fabra Barcelona
Organisation Européenne pour la Recherche Nucléaire
2026
978-3-032-05665-8
978-3-032-05663-4
Lecture Notes in Computer Science; 16135 LNCS
1611-3349
0302-9743
131-142
REVIEWED
EPFL
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
|  |  | Daejeon, Korea, Republic of | 2025-09-23 - 2025-09-27 |