The **whole** ImageNet dataset must be downloaded from [here](https://github.com/open-mmlab/mmfewshot/blob/main/tools/data/classification/mini-imagenet/README.md) if one wants to use mini_imagenet. However, the mini-ImageNet datasets popular in the FSL community are usually divided into splits in advance, e.g. [Optimization...
```yaml
train_dataset: mini-imagenet
train_dataset_args: {split: train}
tval_dataset: mini-imagenet
tval_dataset_args: {split: test}
val_dataset: mini-imagenet
val_dataset_args: {split: val}
```
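For reference, a minimal sketch of how a config like the one above is typically consumed: a dataset class selects a pre-divided split file based on the `split` argument. The file name (`mini-imagenet-{split}.pkl`) and the `images`/`labels` keys below are assumptions for illustration, not the repository's actual loader.

```python
# Hypothetical loader sketch: pick a pre-divided mini-ImageNet split file
# (train / val / test) according to the `split` argument from the config.
import pickle
from torch.utils.data import Dataset

class MiniImageNet(Dataset):
    def __init__(self, root, split='train'):
        assert split in ('train', 'val', 'test')
        # Assumed file naming; adjust to whatever the pre-divided release uses.
        with open(f'{root}/mini-imagenet-{split}.pkl', 'rb') as f:
            data = pickle.load(f)
        self.images = data['images']   # assumed key
        self.labels = data['labels']   # assumed key

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# The config above would then roughly map to:
#   train set -> MiniImageNet(root, split='train')
#   tval set  -> MiniImageNet(root, split='test')
#   val set   -> MiniImageNet(root, split='val')
```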
When I tried to run train_meta_baseline.py with "max_epoch=100", training always stopped at epoch 83, with the progress bar stuck at "tval .................... 0%". It seems that something is wrong with "tval".
In LFTNet.py, under "update model parameters according to model_loss":

```python
meta_grad = torch.autograd.grad(model_loss, self.split_model_parameters()[0], create_graph=True)
for k, weight in enumerate(self.split_model_parameters()[0]):
    weight.fast = weight - self.model_optim.param_groups[0]['lr'] * meta_grad[k]
meta_grad = [g.detach() for g in meta_grad]
```
...
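For context, a minimal sketch of the fast-weight pattern this snippet follows (a toy model, not the actual LFTNet code): the inner-loop update stores the adapted weight in `weight.fast` so the original parameter is untouched, and `create_graph=True` keeps the inner gradient differentiable so the outer-loop loss can backpropagate through the update.

```python
# Toy illustration of the MAML-style fast-weight update (names are hypothetical).
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(4, 2))

    def forward(self, x):
        # Use the fast weight if an inner-loop step has produced one.
        w = self.w.fast if getattr(self.w, "fast", None) is not None else self.w
        return x @ w

net = ToyNet()
inner_lr = 0.01
x, y = torch.randn(8, 4), torch.randn(8, 2)

# Inner-loop (support-set) loss and differentiable gradient.
loss = ((net(x) - y) ** 2).mean()
grads = torch.autograd.grad(loss, [net.w], create_graph=True)
net.w.fast = net.w - inner_lr * grads[0]   # adapted weight, graph retained

# Outer-loop (query-set) loss now flows through the fast weight back to net.w.
query_loss = ((net(x) - y) ** 2).mean()
query_loss.backward()
print(net.w.grad.shape)
```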
At https://image-net.org/challenges/LSVRC/index.php I saw a lot of versions: 2010, 2012, ... Could you provide a link that can be downloaded directly?
It seems odd to use test features for evaluation, see https://github.com/gaopengcuhk/Tip-Adapter/blob/fcb06059457a3b74e44ddb0d5c96d2ea7e4c5957/main.py#L111. Could the authors give some explanation?
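For context, the usual concern here is leakage: if hyper-parameters are tuned on the same test features that are later used to report accuracy, the reported number is optimistic. Below is a rough, hypothetical sketch of a leakage-free protocol for a Tip-Adapter-style cache model, tuning on a held-out validation split and evaluating once on the test split; the tensor names and the exact formula are assumptions for illustration, not the repository's code.

```python
# Hypothetical leakage-free evaluation sketch (placeholder tensors and names).
import torch

def accuracy(logits, labels):
    return (logits.argmax(dim=-1) == labels).float().mean().item()

def tip_like_logits(feats, clip_weights, cache_keys, cache_values, alpha, beta):
    # Zero-shot CLIP logits plus a key/value cache term, Tip-Adapter style.
    clip_logits = 100.0 * feats @ clip_weights
    affinity = feats @ cache_keys.t()
    cache_logits = ((-1) * (beta - beta * affinity)).exp() @ cache_values
    return clip_logits + alpha * cache_logits

def search_hparams(val_feats, val_labels, clip_weights, cache_keys, cache_values):
    # Grid-search alpha/beta on the validation split only.
    best, best_acc = (1.0, 1.0), 0.0
    for alpha in [0.5, 1.0, 2.0, 5.0]:
        for beta in [1.0, 3.0, 5.5, 7.0]:
            logits = tip_like_logits(val_feats, clip_weights, cache_keys, cache_values, alpha, beta)
            acc = accuracy(logits, val_labels)
            if acc > best_acc:
                best, best_acc = (alpha, beta), acc
    return best

# alpha, beta = search_hparams(val_feats, val_labels, ...)               # tuned on val only
# final_acc = accuracy(tip_like_logits(test_feats, ...), test_labels)    # reported once
```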
Could you please add more comments to the forward method in MSDN.py? There are too many variables to follow.
Could you please upload the class split file (train/val/test) for the CUB dataset?
When I debug 01_miniimagenet_stage2_SEGA_5W1S, I get the following result in traincode.py (def train_stage2(opt)):

- Knovel_ids.size(): **torch.Size([8, 5])**
- Kbase_ids.size(): **torch.Size([8, 59])**
- logit_query.size(): **torch.Size([8, 60, 64])**

It seems 64 base classes...
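For what it's worth, these shapes are consistent with the last logit dimension covering base plus novel classes rather than base classes alone (59 + 5 = 64); this reading of the layout is an assumption and worth confirming against the code.

```python
# Hypothetical shape check matching the debug output above: with 59 base-class
# ids and 5 novel-class ids per episode, each query is scored against 64 classes.
import torch

batch, n_novel, n_base, n_query = 8, 5, 59, 60
Knovel_ids = torch.zeros(batch, n_novel, dtype=torch.long)
Kbase_ids = torch.zeros(batch, n_base, dtype=torch.long)
logit_query = torch.zeros(batch, n_query, n_base + n_novel)

assert logit_query.shape == (8, 60, 64)   # [episodes, query samples, base + novel classes]
```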