Infoscience

conference paper

FedSSA: Semantic Similarity-based Aggregation for Efficient Model-Heterogeneous Personalized Federated Learning

Yi, Liping • Yu, Han • Shi, Zhuan • Wang, Gang • Liu, Xiaoguang • Cui, Lizhen • Li, Xiaoxiao
April 19, 2024
Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024)
33rd International Joint Conference on Artificial Intelligence (IJCAI 2024)

Federated learning (FL) is a privacy-preserving collaborative machine learning paradigm. Traditional FL requires all data owners (a.k.a. FL clients) to train the same local model. This design is not well-suited for scenarios involving data and/or system heterogeneity. Model-Heterogeneous Personalized FL (MHPFL) has emerged to address this challenge. Existing MHPFL approaches often rely on a public dataset of the same nature as the learning task, or incur high computation and communication costs. To address these limitations, we propose the Federated Semantic Similarity Aggregation (FedSSA) approach for supervised classification tasks, which splits each client's model into a heterogeneous (structure-different) feature extractor and a homogeneous (structure-same) classification header. It performs local-to-global knowledge transfer via semantic similarity-based header parameter aggregation. In addition, global-to-local knowledge transfer is achieved via an adaptive parameter stabilization strategy which fuses the seen-class parameters of historical local headers with those of the latest global header for each client. FedSSA does not rely on public datasets, and it transmits only partial header parameters to save costs. Theoretical analysis proves the convergence of FedSSA. Extensive experiments demonstrate that FedSSA achieves up to 3.62% higher accuracy, 15.54 times higher communication efficiency, and 15.52 times higher computational efficiency compared to 7 state-of-the-art MHPFL baselines.
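
The abstract describes two knowledge-transfer steps: the server aggregates each class's header parameters over only the clients that have seen that class, and each client then fuses its historical local header with the latest global header. The Python sketch below illustrates both steps under explicit assumptions: the header is a linear layer stored as a (num_classes, feature_dim) matrix, aggregation is a plain per-class average, and the mixing coefficient mu follows a hypothetical linear decay. Function names, the schedule, and the zero initialization are illustrative and are not taken from the paper.

```python
import numpy as np

def aggregate_headers(local_headers, seen_classes, num_classes, feat_dim):
    """Local-to-global step: average each class's header row over only the
    clients that hold samples of that class (a sketch of seen-class,
    per-class aggregation; not the paper's exact weighting)."""
    global_header = np.zeros((num_classes, feat_dim))
    for c in range(num_classes):
        rows = [h[c] for h, seen in zip(local_headers, seen_classes) if c in seen]
        if rows:  # leave the row at zero if no client has seen class c
            global_header[c] = np.mean(rows, axis=0)
    return global_header

def stabilize_header(local_header, global_header, seen, t, total_rounds):
    """Global-to-local step: fuse the client's historical header with the
    latest global header on its seen classes. mu is a hypothetical linear
    decay, standing in for the paper's adaptive coefficient."""
    mu = 1.0 - t / total_rounds  # trust the global header more over time
    fused = global_header.copy()
    for c in seen:
        fused[c] = mu * local_header[c] + (1.0 - mu) * global_header[c]
    return fused

# Toy round: 3 clients, 4 classes, 5-dimensional features.
rng = np.random.default_rng(0)
headers = [rng.normal(size=(4, 5)) for _ in range(3)]
seen = [{0, 1}, {1, 2}, {2, 3}]
g = aggregate_headers(headers, seen, num_classes=4, feat_dim=5)
h0 = stabilize_header(headers[0], g, seen[0], t=3, total_rounds=10)
```

Because clients exchange only the rows of this matrix for classes they have seen, communication stays partial, which is the cost saving the abstract refers to.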

Details

Type
conference paper
ArXiv ID
2312.09006v3
Author(s)
Yi, Liping
Yu, Han
Shi, Zhuan (EPFL)
Wang, Gang
Liu, Xiaoguang
Cui, Lizhen
Li, Xiaoxiao
Date Issued
2024-04-19
Publisher
IJCAI
Published in
Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024)
ISBN of the book
978-1-956792-04-1
Start page
5371
End page
5379
Subjects
Computer Science - Learning
Computer Science - Distributed, Parallel, and Cluster Computing
URL
Proceedings: https://www.ijcai.org/proceedings/2024/
Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
LIA
Event name
33rd International Joint Conference on Artificial Intelligence (IJCAI 2024)
Event acronym
IJCAI 2024
Event place
Jeju, Korea
Event date
2024-08-03 - 2024-08-09
Available on Infoscience
April 8, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/248831