Title: Fast Federated Unlearning in Skew Environments
Author: Huynh, Thanh Trung
Dates: 2025-02-13; 2025-02-13; 2025-02-12; 2025-02-10
Handle: https://infoscience.epfl.ch/handle/20.500.14299/246902
Language: en
Keywords: Machine Unlearning; Federated Learning; Skew Resilience
Type: text::report::technical report

Abstract: Federated learning (FL) enables privacy-preserving model training but faces growing demands for unlearning: removing specific training data to enforce privacy rights and mitigate data poisoning. Existing unlearning methods, largely designed for centralized learning, often fail in FL due to its decentralized nature and costly retraining requirements. We propose Fast-FedUL, a novel unlearning framework for FL in skewed environments. Unlike conventional methods, Fast-FedUL eliminates retraining by directly reversing a target client's influence on the global model. Through rigorous analysis, we develop an efficient algorithm with theoretical guarantees, ensuring reliable and scalable federated unlearning.
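
The record does not include the algorithm itself. As a rough, hypothetical illustration of the retraining-free idea, the sketch below assumes the server logs each client's per-round updates and that "reversing a target client's influence" amounts to subtracting that client's aggregation-weighted contributions from the final global model. The names unlearn_client and update_log are invented for this example; the actual Fast-FedUL reversal and its skew-resilience correction may differ.

import numpy as np

def unlearn_client(global_model, update_log, weights, target):
    """Subtract the target client's logged, weighted updates from the model.

    Assumption (not from the record): under FedAvg-style aggregation, each
    client update entered the global model scaled by that client's weight,
    so undoing the client means removing those weighted terms round by round.
    """
    unlearned = global_model.copy()
    for round_update in update_log[target]:
        unlearned -= weights[target] * round_update
    return unlearned

# Toy usage: two clients, three rounds each, a 10-parameter model.
rng = np.random.default_rng(0)
model = rng.normal(size=10)
log = {c: [rng.normal(size=10) for _ in range(3)] for c in (0, 1)}
w = {0: 0.5, 1: 0.5}
model_without_client_1 = unlearn_client(model, log, w, target=1)

Because this naive subtraction ignores cross-round interactions between clients, a correction step would be needed in practice, which is presumably where the skew-resilience analysis described in the abstract comes in.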