ANDA

Results 6 comments of ANDA

The released training code is **not** for training the super-net, right? I find it hard to train the super-net without the detailed settings. Could you please release the **code or configs** for...

I have added code to support applying HPO methods (e.g., SHA) to wrap FTS. Now we can use SHA to wrap FTS by running `python federatedscope/hpo.py --cfg scripts/example_configs/femnist/sha_wrap_fts.yaml`. If...
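For readers unfamiliar with SHA, a minimal sketch of Successive Halving under stated assumptions: this is not FederatedScope's actual implementation, and the `evaluate` callback and keep-ratio `eta` are illustrative choices only.

```python
def successive_halving(configs, evaluate, budget=1, eta=3):
    """Minimal Successive Halving (SHA) sketch.

    configs:  list of hyperparameter dicts (the initial rung)
    evaluate: fn(config, budget) -> validation loss (lower is better)
    eta:      fraction of configs promoted each rung (keep top 1/eta)
    """
    rung = list(configs)
    while len(rung) > 1:
        # Evaluate every surviving config at the current budget
        scored = sorted(rung, key=lambda c: evaluate(c, budget))
        # Promote the best 1/eta configs to the next rung
        rung = scored[:max(1, len(rung) // eta)]
        # Promoted configs receive eta times more budget next round
        budget *= eta
    return rung[0]
```

When used to wrap a federated HPO method such as FTS, `evaluate` would run the inner method for `budget` rounds and report its validation loss.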

I ran an experiment on FEMNIST to compare FTS vs. local BO. The search space is lr: [0.005, 0.5], weight_decay: [0.0, 1.0], dropout: [0.0, 0.5]. There are 5 total...
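As an illustration, the search space above can be written down and sampled like this; uniform sampling is an assumption for the sketch (a real run might sample lr on a log scale), and `sample_config` is a hypothetical helper, not part of FederatedScope.

```python
import random

# Search space from the experiment description: (low, high) bounds per hyperparameter
SPACE = {
    "lr": (0.005, 0.5),
    "weight_decay": (0.0, 1.0),
    "dropout": (0.0, 0.5),
}

def sample_config(rng=random):
    """Draw one configuration uniformly at random from SPACE."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in SPACE.items()}
```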

Same issue here. I only got 73.6% mIoU using the released model at 1024*512 resolution. I don't know whether it was caused by the model or by the resolution I used.

> For other people who may have visualization problems: I used the exact same code that I previously mentioned. However, I changed the training parameters, and the problem was solved....

@nicolefinnie I think the reason is that "WeightNorm" is applied in the latest code, so the class-wise learnable norms can actually play the role of the "factors".
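For context, weight normalization reparameterizes each weight vector as w = g · v / ||v||, so the scalar g becomes a learnable per-class norm, which is the "factor" role referred to above. A minimal pure-Python sketch of that decomposition (illustrative only, not the repo's code):

```python
import math

def weight_norm(v, g):
    """Apply weight normalization w_i = g_i * v_i / ||v_i||.

    v: list of per-class direction vectors
    g: list of per-class learnable norms (scalars)
    """
    out = []
    for vec, scale in zip(v, g):
        norm = math.sqrt(sum(x * x for x in vec))
        # Direction comes from v; magnitude is controlled entirely by g
        out.append([scale * x / norm for x in vec])
    return out
```

After this reparameterization, ||w_i|| = g_i exactly, so each class's effective norm is learned independently of its direction.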