🚀 Feature: Implement ML paper for Personalized Federated Learning
🔖 Feature description
We need to support the paper *Personalized Federated Learning for Heterogeneous Clients with Clustered Knowledge Transfer* by Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi.
The paper presents an interesting approach: clients are clustered by data-distribution similarity and then exchange knowledge within clusters via logits rather than full model parameters, enabling hyper-personalization. It also allows each client to choose a model architecture suited to its device heterogeneity.
- [ ] Implement the neural network architectures for reproducing the experiments in the `experiments` folder
- [ ] Implement the aggregation/FL strategy in `fl_strategies` (see the sketch after this list)
- [ ] Create a config in the `configs` folder
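As a rough starting point, here is a minimal sketch of what the aggregation step in `fl_strategies` could look like. All class/method names are hypothetical, and the clustering rule here is a plain k-means over client logits used as a stand-in; the paper's exact clustering criterion may differ:

```python
import numpy as np
from sklearn.cluster import KMeans

class PerFedCKTStrategy:
    """Server-side sketch: cluster clients by the similarity of their logits
    on a shared public dataset, then return to each client the averaged
    logits of its cluster for co-distillation."""

    def __init__(self, num_clusters: int):
        self.num_clusters = num_clusters

    def aggregate(self, client_logits: dict) -> dict:
        # client_logits: client_id -> (num_public_samples, num_classes) array.
        ids = list(client_logits)
        flat = np.stack([client_logits[c].ravel() for c in ids])
        labels = KMeans(n_clusters=self.num_clusters, n_init=10).fit_predict(flat)

        consensus = {}
        for k in set(labels):
            members = [c for c, lbl in zip(ids, labels) if lbl == k]
            # Broadcast the cluster-average logits back to every member.
            avg = np.mean([client_logits[c] for c in members], axis=0)
            for c in members:
                consensus[c] = avg
        return consensus
```

Because only logits on a public dataset travel between clients and the server, the communication cost is independent of each client's model size, which is what makes heterogeneous architectures feasible.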
🎤 Pitch
Abstract of the paper:
Personalized federated learning (FL) aims to train model(s) that can perform well for individual clients that are highly data and system heterogeneous. Most work in personalized FL, however, assumes using the same model architecture at all clients and increases the communication cost by sending/receiving models. This may not be feasible for realistic scenarios of FL. In practice, clients have highly heterogeneous system-capabilities and limited communication resources. In our work, we propose a personalized FL framework, PerFed-CKT, where clients can use heterogeneous model architectures and do not directly communicate their model parameters. PerFed-CKT uses clustered co-distillation, where clients use logits to transfer their knowledge to other clients that have similar data-distributions. We theoretically show the convergence and generalization properties of PerFed-CKT and empirically show that PerFed-CKT achieves high test accuracy with several orders of magnitude lower communication cost compared to the state-of-the-art personalized FL schemes.
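To make the clustered co-distillation idea concrete, the client-side objective could look roughly like the sketch below (PyTorch; the `lam` weight and the MSE penalty are illustrative assumptions, and the paper's exact regularizer may differ):

```python
import torch.nn.functional as F

def co_distillation_loss(local_logits, local_targets,
                         public_logits, consensus_logits, lam=0.5):
    """Local cross-entropy plus a penalty pulling the client's logits on the
    public dataset toward its cluster's consensus logits."""
    ce = F.cross_entropy(local_logits, local_targets)
    distill = F.mse_loss(public_logits, consensus_logits)
    return ce + lam * distill
```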
📖 Additional Content
No response
👀 Have you spent some time to check if this issue has been raised before?
- [X] I checked and didn't find a similar issue
🏢 Have you read the Code of Conduct?
- [X] I have read the Code of Conduct