Fast Federated Unlearning in Skew Environments
Federated learning (FL) enables privacy-preserving model training but faces growing demands for unlearning, i.e., removing the influence of specific training data to enforce privacy rights and mitigate data poisoning. Existing unlearning methods, largely designed for centralized learning, often fail in FL due to its decentralized nature and the prohibitive cost of retraining. We propose Fast-FedUL, a novel unlearning framework for FL in skewed environments. Unlike conventional methods, Fast-FedUL eliminates retraining by directly reversing a target client's influence on the global model. Through rigorous analysis, we develop an efficient algorithm with theoretical guarantees, ensuring reliable and scalable federated unlearning.
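To make the core idea concrete, the following is a minimal Python sketch of "reversing a target client's influence" under simplifying assumptions: the server runs FedAvg-style aggregation and logs each client's weighted update per round. All names here (fedavg_round, unlearn_client, target_logs) are illustrative, not from the paper.

    import numpy as np

    def fedavg_round(global_w, client_updates, weights):
        """One FedAvg-style round: add the weighted average of client updates."""
        agg = sum(w * u for w, u in zip(weights, client_updates))
        return global_w + agg

    def unlearn_client(final_w, target_logs):
        """Subtract the target client's logged, weighted contributions from the
        final global model, with no retraining.
        target_logs: list of (weight, update) pairs, one per round joined."""
        w = final_w.copy()
        for weight, update in target_logs:
            w -= weight * update
        return w

    # Toy usage: 3 clients, 2 rounds, a 4-parameter "model".
    rng = np.random.default_rng(0)
    global_w = np.zeros(4)
    target_logs = []
    for _ in range(2):
        updates = [rng.normal(size=4) for _ in range(3)]
        weights = [1 / 3] * 3
        target_logs.append((weights[0], updates[0]))  # client 0 is the target
        global_w = fedavg_round(global_w, updates, weights)

    unlearned_w = unlearn_client(global_w, target_logs)

Note that this naive per-round subtraction ignores the target client's indirect influence on other clients' subsequent updates; accounting for that cross-round interaction, especially under skewed data, is what the paper's analysis and guarantees address.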