
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

117 Efficient-AI-Backbones issues

Hello, a question about batched_index_select(): in `idx_base = torch.arange(0, batch_size, device=idx.device).view(-1, 1, 1) * num_vertices_reduced`, followed by `idx = idx + idx_base  # (2, 4096, 9)` and `idx = idx.contiguous().view(-1)  # 73728`, what is the role of idx_base, and why is it added to idx? Also, in `feature = x.contiguous().view(batch_size * num_vertices_reduced, -1)[idx, :]`, what operation does indexing with idx (a 1-D tensor) perform, and what is this syntax? Many thanks for any clarification.
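
For readers with the same question, the offset trick can be shown in a minimal, self-contained sketch; the shapes below are illustrative, not taken from the repository:

```python
import torch

# Illustrative shapes only (batch_size=2, 4096 query nodes, 9 neighbors,
# 64 channels); the real values come from the model.
batch_size, num_vertices_reduced, feat_dim = 2, 4096, 64
x = torch.randn(batch_size, num_vertices_reduced, feat_dim)
idx = torch.randint(0, num_vertices_reduced, (batch_size, 4096, 9))

# After flattening x to (batch_size * num_vertices_reduced, feat_dim),
# batch b's vertices occupy rows [b * num_vertices_reduced,
# (b + 1) * num_vertices_reduced). idx_base is exactly that per-batch
# row offset, so adding it turns within-batch indices into indices
# into the flattened tensor.
idx_base = torch.arange(0, batch_size, device=idx.device).view(-1, 1, 1) * num_vertices_reduced
flat_idx = (idx + idx_base).contiguous().view(-1)  # (2 * 4096 * 9,) = (73728,)

# Indexing a 2-D tensor with a 1-D integer tensor is NumPy-style
# "advanced indexing": it gathers one row per index value.
feature = x.contiguous().view(batch_size * num_vertices_reduced, -1)[flat_idx, :]
print(feature.shape)  # torch.Size([73728, 64])
```

In other words, the add-then-flatten pattern is just a batched gather expressed with plain indexing.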

Hi, in the visualization part of ViG, I noticed that the neighbors of the same node differ between the 1st and the 12th block. Does this mean the adjacency...
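
If helpful: this behavior is what one would expect from a DGCNN-style dynamic graph, where each block recomputes k-NN from its own input features, so the adjacency changes as the features evolve. A minimal sketch, with illustrative names that are not the repository's:

```python
import torch

def knn_graph(x: torch.Tensor, k: int = 9) -> torch.Tensor:
    # x: (B, N, C) node features -> (B, N, k) neighbor indices.
    # Note the nearest "neighbor" of each node is itself (distance 0).
    dist = torch.cdist(x, x)
    return dist.topk(k, largest=False).indices

feats = torch.randn(2, 196, 64)
idx_block_1 = knn_graph(feats)
# A deeper block sees transformed features, so its k-NN graph differs.
idx_block_12 = knn_graph(feats + torch.randn_like(feats))
print((idx_block_1 == idx_block_12).float().mean())  # well below 1.0
```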

Hi, thanks for sharing this impressive work. The paper mentions two architectures, an isotropic one and a pyramid one. I noticed that in the code there is a reduce_ratios parameter, and this reduce_ratios...
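
As a hedged sketch of what a reduce ratio typically buys in this setting (check the repo's graph-construction code for the exact mechanics): the candidate-neighbor set is average-pooled by r before the k-NN search, shrinking the pairwise-distance computation from N×N to N×(N/r²):

```python
import torch
import torch.nn.functional as F

def knn_with_reduce(x: torch.Tensor, k: int = 9, r: int = 2) -> torch.Tensor:
    """Sketch: k-NN over a feature map with candidates pooled by r."""
    B, C, H, W = x.shape
    q = x.flatten(2).transpose(1, 2)            # (B, N, C) queries, N = H*W
    y = F.avg_pool2d(x, r) if r > 1 else x      # pooled candidate map
    kv = y.flatten(2).transpose(1, 2)           # (B, N / r**2, C) candidates
    dist = torch.cdist(q, kv)                   # (B, N, N / r**2)
    return dist.topk(k, largest=False).indices  # (B, N, k)

out = knn_with_reduce(torch.randn(2, 64, 56, 56), k=9, r=4)
print(out.shape)  # torch.Size([2, 3136, 9])
```

This would also explain why larger ratios make sense at high-resolution stages of a pyramid, where N is largest.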

I tried to reproduce the results of ViG with the released weight files as seen below. However, what I obtained is top-1 79.70% (top-5 95.07%) for ViG-S. I tried multiple...
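
For anyone double-checking a small top-1 gap like this, the evaluation transform is the usual suspect. Below is a generic validation loop with DeiT-style preprocessing as an assumption (resize to 256, center-crop to 224, ImageNet normalization); `model` is assumed to be the ViG network with the released checkpoint loaded, and the dataset path is a placeholder:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Assumed preprocessing; a different resize/crop or interpolation mode
# can shift top-1 by a few tenths of a percent.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
val_set = datasets.ImageFolder("/path/to/imagenet/val", transform)  # placeholder path
loader = DataLoader(val_set, batch_size=128, num_workers=8)

@torch.no_grad()
def top1(model: torch.nn.Module, loader: DataLoader, device: str = "cuda") -> float:
    model.eval().to(device)
    correct, total = 0, 0
    for images, targets in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return 100.0 * correct / total
```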

Hi, I wonder if I can use the pre-trained model to process larger images (e.g. 256x256 or 384x384). I tried changing `self.pos_embed` and `HW` and loading the pre-trained parameters of...
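
One common remedy, sketched under the assumption that `self.pos_embed` is stored as a 2-D grid of shape (1, C, H, W) (verify against the actual checkpoint): resample the pre-trained positional embedding to the new grid before loading.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, new_hw: tuple[int, int]) -> torch.Tensor:
    # pos_embed: (1, C, H, W) grid, bicubically resampled to new_hw.
    return F.interpolate(pos_embed, size=new_hw, mode="bicubic", align_corners=False)

# Hypothetical usage: moving from 224x224 to 384x384 inputs with a
# stride-4 stem, so the grid grows from 56x56 to 96x96.
pos_embed_224 = torch.zeros(1, 192, 56, 56)
pos_embed_384 = resize_pos_embed(pos_embed_224, (96, 96))
print(pos_embed_384.shape)  # torch.Size([1, 192, 96, 96])
```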

Hi @[iamhankai](https://github.com/iamhankai), thank you for your work! I would like to ask whether you have tried pre-training ViG on the ImageNet-21K dataset, and if so, how well it works.

Currently, I have to git clone the whole repository with all the projects, keep only vit_pytorch, and delete all the others...

Hello, I see that the released code only includes GhostX-RegNet and there is no G-GhostNet code. When will G-GhostNet be open-sourced, or is there no plan to release it for now?

Hi, thank you for uploading the code. In the paper, it is mentioned that, before the image is passed to the model, it is passed through a function G to...

@[yehuitang](https://github.com/yehuitang) Hello, thank you for releasing your VITAUG code. In your paper, you say you trained following DeiT, and DeiT used self-distillation during training, but I do not find it...
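
For reference, DeiT's scheme is, strictly, hard distillation from a separate CNN teacher via a dedicated distillation token rather than self-distillation. Whether it is actually used in this training recipe is exactly what the question asks; the sketch below only shows the DeiT loss itself.

```python
import torch
import torch.nn.functional as F

def deit_hard_distillation_loss(
    student_cls_logits: torch.Tensor,   # logits from the class token
    student_dist_logits: torch.Tensor,  # logits from the distillation token
    teacher_logits: torch.Tensor,       # frozen teacher's logits
    targets: torch.Tensor,              # ground-truth labels
) -> torch.Tensor:
    # Half the objective supervises the class token with the labels...
    loss_cls = F.cross_entropy(student_cls_logits, targets)
    # ...and half supervises the distillation token with the teacher's
    # hard predictions.
    loss_dist = F.cross_entropy(student_dist_logits, teacher_logits.argmax(dim=1))
    return 0.5 * loss_cls + 0.5 * loss_dist
```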