YangZhaohui
> Did the author use cutout while training? I didn't find cutout except in the parser.

Just here: `parser.add_argument('cutout', action='store_true', default='False')`. But it didn't appear anywhere in the following main code: https://github.com/quark0/darts/blob/master/cnn/utils.py#L72
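As a side note, the quoted flag has two pitfalls worth flagging: the argument name is missing the `--` prefix, and `default='False'` is a truthy string rather than a boolean. A minimal corrected sketch (not the DARTS code itself, just standard argparse usage):

```python
import argparse

def build_parser():
    # Corrected form of the flag quoted above: '--cutout' makes it an
    # optional flag, and default=False is a real boolean. Note that
    # default='False' would be a non-empty string, which is truthy.
    parser = argparse.ArgumentParser()
    parser.add_argument('--cutout', action='store_true', default=False)
    return parser
```

With this form, `args.cutout` is `False` when the flag is absent and `True` when `--cutout` is passed.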
@Margrate In my opinion, the model size and FLOPs are efficient; however, considering inference time, non-serial architectures like DARTS take much longer and are therefore less efficient.
https://github.com/huawei-noah/vega The official automl pipeline.
Thank you for your interest in our project. The CARS project has been merged into and improved to support Huawei's AutoML pipeline, so the code will not be released independently. The...
@csuwoshikunge Each individual in the next generation produced by the EA algorithm is not trained from scratch. In contrast, all individuals in the generation share parameters and inherit trained...
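A minimal sketch of what parameter inheritance via weight sharing looks like in principle (a hypothetical illustration, not the CARS implementation): each individual is a subset of operations whose weights are references into one shared pool, so an offspring starts from already-trained weights rather than random initialization.

```python
# Hypothetical shared weight pool for a toy supernet (names are made up).
shared_weights = {
    'conv1': [0.5, -0.2],
    'conv2': [0.1, 0.9],
    'conv3': [-0.4, 0.3],
}

def make_individual(active_ops):
    # An individual selects a subset of the supernet's operations; its
    # weights are references into the shared pool, so updates made while
    # training one individual are visible to every individual using that op.
    return {op: shared_weights[op] for op in active_ops}

parent = make_individual(['conv1', 'conv2'])
child = make_individual(['conv1', 'conv3'])  # inherits the trained 'conv1' weights
```

Here the child reuses the parent's (already trained) `conv1` weights for free, which is the point of not training each generation from scratch.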
I have observed this too; it seems like some kind of skip residual. To the best of my knowledge, this is the first time I have seen this kind of residual connection. Very interesting, and...
@BaptisteNguyen Thanks for your attention. Please email me: [email protected]
Hello @Jonghoongwak, if you mean the SLB paper, you can send me an email. This repo does not contain the code for SLB. Best, zhaohui