Some questions about the CategoriesSampler
(1) In Trainer.py, line 87:
train_sampler = CategoriesSampler(train_dataset.sub_indexes, len(train_loader), self.args.way+3, self.args.shot)
Why collect samples from 3 extra classes (self.args.way+3 instead of self.args.way)?
(2) In lr.py, line 88:

```python
lenth_per = torch.from_numpy(self.index[c])
way_per = len(lenth_per)
shot = torch.randperm(way_per)[:self.shot]
```
Here, lenth_per is the list of dataset indices for the samples of class c, and way_per is the total number of samples in class c. If the goal is to draw self.shot samples from class c, shouldn't it be lenth_per[:self.shot], or random.sample(lenth_per, self.shot)? The indices produced by shot = torch.randperm(num_data_in_one_class)[:self.shot] do not seem to be the true dataset indices of the samples.
What is the intention of the expression c*num_data_in_one_class + shot?
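If I am reading the surrounding code correctly, the arithmetic seems to assume that the dataset stores samples class by class, so class c occupies the contiguous block of global indices [c*num_data_in_one_class, (c+1)*num_data_in_one_class). Under that assumption, adding c*num_data_in_one_class converts the within-class offsets from torch.randperm into global dataset indices. A small sketch of that assumption (the constants below are made up, not taken from the repo):

```python
import torch

# Assumed layout: samples are stored class-by-class, so class c occupies
# global indices [c*num_data_in_one_class, (c+1)*num_data_in_one_class).
num_data_in_one_class = 500  # hypothetical per-class count, not from the repo
n_shot = 5
c = 3  # an example class id

# Draw n_shot random within-class offsets, as the snippet above does.
shot = torch.randperm(num_data_in_one_class)[:n_shot]

# Convert within-class offsets to global dataset indices.
global_idx = c * num_data_in_one_class + shot

# Every resulting index falls inside class c's contiguous block.
assert ((global_idx >= c * num_data_in_one_class)
        & (global_idx < (c + 1) * num_data_in_one_class)).all()
```

If the per-class index lists in self.index are not contiguous blocks of equal size, this arithmetic would break, which is exactly why I am asking.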
(3) During session-0 training, train_loader supplies the query set for an episode, and each query batch contains data from all 60 base classes. Meanwhile, train_fsl_loader samples the support set, and each episode collects data from 3 extra classes. I don't quite see how the program's episode-sampling procedure corresponds to the Random Episode Selection process described in the paper. Could you explain this in more detail?
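For comparison, here is a minimal sketch of what I would have expected an index-based episodic sampler to look like: per episode, pick `way` random classes, then `shot` random samples per class, indexing directly into each class's own list of true dataset indices. The class name EpisodeSampler and its arguments are my own, not from the repo:

```python
import torch
from torch.utils.data import Sampler

class EpisodeSampler(Sampler):
    """Sketch of an episodic sampler (hypothetical, simplified):
    each batch is `way` random classes x `shot` random samples per class."""

    def __init__(self, labels, n_batches, way, shot):
        self.n_batches = n_batches
        self.way = way
        self.shot = shot
        labels = torch.as_tensor(labels)
        # Per-class lists of true dataset indices; this avoids the
        # c*num_data_in_one_class+shot arithmetic entirely.
        self.class_indices = [(labels == c).nonzero(as_tuple=True)[0]
                              for c in torch.unique(labels)]

    def __len__(self):
        return self.n_batches

    def __iter__(self):
        for _ in range(self.n_batches):
            batch = []
            # Randomly select `way` classes for this episode.
            classes = torch.randperm(len(self.class_indices))[:self.way]
            for c in classes:
                idx = self.class_indices[int(c)]
                # Randomly select `shot` samples within the chosen class.
                pick = torch.randperm(len(idx))[:self.shot]
                batch.append(idx[pick])
            yield torch.cat(batch)
```

If CategoriesSampler is meant to do the equivalent of this, I'd like to understand how the way+3 classes and the offset arithmetic fit into the Random Episode Selection scheme.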