Federated-Learning-in-PyTorch

The accuracy on Cifar10 may be low.

Open ADAM0064 opened this issue 3 years ago • 1 comments

I ran the FedAvg code with the CNN2 given in model.py on the CIFAR-10 dataset. I also excluded the model initialization in server.py, and all of the clients (only 10) were set to update and upload their models to the server. However, over about 100 rounds, the accuracy only rises to around 70% and does not improve afterwards. I wonder if there is anything I've missed or misunderstood. Could anyone please offer me some advice?

ADAM0064 avatar Jul 24 '22 13:07 ADAM0064

Hello, I am also trying to train on CIFAR-10, but my training accuracy improves very slowly with LR = 0.001. I have tried different learning rates many times, and sometimes there is no change at all. I don't know where the problem is. Can you share some suggestions or parameters? Thank you.

Weixiang-Han avatar Aug 27 '22 04:08 Weixiang-Han

This CNN model may not be the best fit for this dataset; you could try another CNN architecture (such as VGG) to get better performance.

otouat avatar Nov 30 '22 09:11 otouat

Sorry for the super late reply. To reproduce the best performance on the CIFAR-10 dataset reported in the original paper (McMahan et al., 2016), you should modify the hyperparameter settings.

According to Figure 4 and the CIFAR experiments section of the paper, with 100 clients you need E=5 (local epochs), B=50 (local batch size), lr >= 0.05, at least R > 500 communication rounds, and a learning rate decay of 0.99 per round. Please kindly check the original paper and run the experiment again.
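For reference, the settings above can be sketched as a small config with the per-round learning rate decay; the variable names here are illustrative, not taken from this repository's actual code:

```python
# Hedged sketch of the CIFAR-10 hyperparameters suggested above
# (McMahan et al., 2016, Figure 4). Names are illustrative only.
E = 5          # local epochs per client
B = 50         # local batch size
K = 100        # number of clients
R = 500        # communication rounds (at least)
lr0 = 0.05     # initial learning rate (>= 0.05)
decay = 0.99   # multiplicative LR decay applied each round

def lr_at_round(r: int, lr0: float = lr0, decay: float = decay) -> float:
    """Learning rate used in communication round r (0-indexed)."""
    return lr0 * (decay ** r)

# With decay=0.99, after 100 rounds the LR has shrunk to about 37%
# of its starting value, which is one reason short runs can plateau.
print(lr_at_round(0), lr_at_round(100))
```

With only ~100 rounds (as in the original question), training stops well before the schedule has run its course, which is consistent with the accuracy plateauing around 70%.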

FYI, I wrote a paper on personalized federated learning, called SuPerFed (published at the 28th ACM SIGKDD 2022 conference). You may find a more refined implementation of FedAvg, FedProx, etc. in that repo. Thank you.

vaseline555 avatar Dec 16 '22 04:12 vaseline555