@smartwell The mobile-side latency was measured with MACE/SNPE on a Qualcomm Snapdragon 845 as the single-frame benchmark runtime (excluding init overhead). Are you referring to the 9.25 ms when running on a PC GPU, which doesn't meet your expectation, i.e., you find it slow?
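For reference, here is a minimal PyTorch sketch (not the MACE/SNPE benchmark itself) of how single-frame latency can be measured on a PC GPU while excluding init and warm-up costs. The stand-in model, input size, and run counts are illustrative assumptions only.

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in network; replace with the actual model under test.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1000),
)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
x = torch.randn(1, 3, 224, 224, device=device)

with torch.no_grad():
    # Warm-up runs so one-time costs (allocation, kernel selection) are excluded.
    for _ in range(10):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()

    runs = 100
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"avg single-frame latency: {(time.perf_counter() - start) / runs * 1000:.2f} ms")
```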
@smartwell It should be depthwise + pointwise, which cannot simply be equated with group convolution. Also, different devices require different model designs (which is exactly the point of this paper: being mobile GPU-aware), so a model that is fast on mobile is not necessarily fast on a PC, a point that is easy to overlook.
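To make the distinction concrete, below is a small PyTorch sketch contrasting a depthwise + pointwise pair with a grouped convolution; the channel counts and group number are arbitrary values chosen for illustration.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 32, 64

# Depthwise separable convolution: a per-channel 3x3 (groups == in_channels)
# followed by a 1x1 pointwise convolution that mixes channels.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

# A grouped 3x3 convolution (here 4 groups) is a different operator: each group
# mixes several channels spatially and there is no separate pointwise step.
grouped = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, groups=4, bias=False)

x = torch.randn(1, in_ch, 56, 56)
print(depthwise_separable(x).shape, grouped(x).shape)
```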
Thanks for asking. The evaluation command has just been updated. For training, please refer to our paper for more information.
@shengyuwoo You need to modify the command accordingly. Note that `[MoGA_A|MoGA_B|MoGA_C]` means you should pick one of the models, e.g. `MoGA_A`. For the dataset structure, please follow the setting in [FairNAS](https://github.com/xiaomi-automl/FairNAS).
@shengyuwoo It is zipped with the default macOS compression tool. I can't seem to reproduce your problem. Could you show me how you unzip the files?
@shengyuwoo Note that `MoGaA.pth.tar` should be used as is; there is no need to untar it. So the right way is `--pretrained-path MoGaA.pth.tar`. Good luck!
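As a quick sketch of why no extraction is needed: despite the `.tar` suffix, the checkpoint is a regular `torch.save` file and is loaded directly. The `state_dict` key layout below is an assumption, so inspect the loaded object for the actual structure.

```python
import torch

# Load the checkpoint file as-is; no untarring step is involved.
checkpoint = torch.load("MoGaA.pth.tar", map_location="cpu")

# Assumed layout: either a raw state dict or a dict with a "state_dict" key.
state_dict = checkpoint.get("state_dict", checkpoint)
print(len(state_dict), "tensors loaded")
# model.load_state_dict(state_dict)  # with the corresponding MoGA_A model built
```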
Please refer to our reply to your issue in the Scarlet repo.
@namcao000 Thanks for noticing; it was a mistake in the paper, which has been corrected in the latest update. The code is correct.
@xxsgcjwddsg Thanks for your interest. The training code is based on MnasNet and will be released upon the publication of our paper.
@666zz666 Skip connections are among the choices for each layer, including the downsampling layers, although they are not drawn in the final architecture.
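As a rough illustration only (not the repo's actual search space), a per-layer choice set that includes an identity (skip) op could look like the following; the other candidate ops here are placeholders.

```python
import torch.nn as nn

def choice_ops(channels: int) -> nn.ModuleList:
    """Hypothetical candidate ops for one searchable layer."""
    return nn.ModuleList([
        nn.Identity(),                                      # skip connection
        nn.Conv2d(channels, channels, 3, padding=1),        # placeholder op
        nn.Conv2d(channels, channels, 5, padding=2),        # placeholder op
    ])
```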