Hi, @bird-two. Thank you for your valuable comments! The problem you mentioned does exist when generating the pathological Non-IID setting with the unbalanced raw dataset. As the statistical heterogeneity in...
Hi, @qq648545022. I mentioned "By using the package [opacus v0.15](https://github.com/pytorch/opacus/releases/tag/v0.15.0)" in the README.md; please check it.
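For reference, here is a minimal sketch of how opacus is typically wired in, assuming the pre-1.0 `PrivacyEngine.attach` API that opacus v0.15 provides; the model, optimizer, and all numeric values below are placeholders, not the repository's actual settings:

```python
import torch
from opacus import PrivacyEngine

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# In opacus v0.15, the engine is constructed around the model and then
# attached to the optimizer; sample_rate, noise_multiplier, and
# max_grad_norm are placeholder values.
privacy_engine = PrivacyEngine(
    model,
    sample_rate=0.01,      # batch_size / dataset_size
    noise_multiplier=1.0,  # scale of the Gaussian noise
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)
privacy_engine.attach(optimizer)

# After attaching, optimizer.step() performs DP-SGD (clipping + noise) as usual.
```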
> I think this code can't reduce the number of parameters. The code does not implement the ideas in your paper. Is there an error in my understanding? @wardseptember After...
For question one (evaluate function): This is currently a platform for research, not for production. You can modify any part of the code as you like. If testing on all the...
I have the same question as @cuicathy.
Instead of using `OLE.py`, I found that PyTorch can compute the nuclear norm directly with `loss = torch.norm(X, p='nuc')`.
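A minimal sketch of this built-in route, assuming `X` is a 2D feature matrix (shapes below are placeholders); the nuclear norm is differentiable, so it can serve as an OLE-style low-rank regularizer:

```python
import torch

# Feature matrix: rows are sample embeddings (placeholder sizes).
X = torch.randn(32, 128, requires_grad=True)

# Nuclear norm = sum of singular values of X.
loss = torch.norm(X, p='nuc')
loss.backward()  # gradients flow back through the singular values
```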
Yes, there is no strict requirement on the `-gr` setting. If the algorithm converges within 1000 rounds, then `-gr 1000` and `-gr 2000` give the same final result.
Does "even if the number of clients is set to 1 for the Cifar100 dataset" here mean setting the number of clients to 1 when partitioning the dataset with the code in the dataset folder? Our default split is train set : test set = 3:1, and the original train set and test set are merged before being re-split 3:1. The original dataset uses train set : test set = 5:1, so it has comparatively more training data and is less prone to overfitting. The amount of training data is critical for model training; if you want to compare against centralized training, please modify the corresponding code to keep this aspect consistent.
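A minimal sketch of the merge-then-split idea described above (hypothetical code, not the repository's actual dataset script): merge the original CIFAR-100 train and test sets, shuffle, and re-split them 3:1.

```python
import numpy as np
from torchvision.datasets import CIFAR100

# Load the original splits (train : test = 5:1 in raw CIFAR-100).
train = CIFAR100(root="./data", train=True, download=True)
test = CIFAR100(root="./data", train=False, download=True)

# Merge all 60000 images, then shuffle.
images = np.concatenate([train.data, test.data])
labels = np.concatenate([train.targets, test.targets])
rng = np.random.default_rng(0)
idx = rng.permutation(len(images))

# Re-split as train : test = 3:1.
cut = int(len(images) * 0.75)
train_images, train_labels = images[idx[:cut]], labels[idx[:cut]]
test_images, test_labels = images[idx[cut:]], labels[idx[cut:]]
```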
Sorry, I cannot give much advice on hyperparameter selection. In my research I design federated learning algorithms following the principles of "tuning hyperparameters as little as possible" and "using a single set of hyperparameters across all tasks as far as possible."
You can contribute to our project by submitting a pull request that adds the Extended Dirichlet strategy. We may add it when we have free time.