Feature_Critic
Feature-Critic Networks for Heterogeneous Domain Generalisation
RuntimeError: The size of tensor a (128) must match the size of tensor b (3) at non-singleton dimension 3
Hi there, I have Python 3.8 with PyTorch 1.7.0. I am not able to run the model: it seems that the computational graph is not synchronized, as I get an...
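For reference, without the full stack trace it is hard to say where this comes from, but a mismatch of 128 vs. 3 at dimension 3 often means per-channel statistics (or a channels-last tensor) being broadcast against a channels-first image batch. A minimal sketch that reproduces the same message; the shapes and the normalisation step are illustrative, not taken from the repo:

```python
import torch

x = torch.rand(8, 3, 128, 128)              # channels-first batch (N, C, H, W), as PyTorch expects
mean = torch.tensor([0.485, 0.456, 0.406])  # shape (3,), broadcasts against the LAST dimension

# x - mean   # RuntimeError: The size of tensor a (128) must match the size of
#            # tensor b (3) at non-singleton dimension 3

# Fix: reshape the per-channel statistics so they broadcast over the channel dimension.
x_norm = x - mean.view(1, 3, 1, 1)
print(x_norm.shape)                          # torch.Size([8, 3, 128, 128])
```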
When I train the model using this code, the GPU memory usage keeps increasing until it runs out of memory. Do you have any solution for that? Thanks a lot, looking forward to your...
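For reference, one pattern that frequently causes steadily growing GPU memory in PyTorch (it is the example given in the PyTorch FAQ on out-of-memory errors) is accumulating loss tensors that are still attached to the autograd graph, e.g. for logging. A minimal sketch of that failure mode, with a toy model standing in for the repo's networks:

```python
import torch
import torch.nn as nn

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = nn.Linear(512, 7).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

running_loss = 0.0
for step in range(10000):
    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 7, (64,), device=device)
    loss = nn.functional.cross_entropy(model(x), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # running_loss += loss        # keeps the autograd history of every step alive
    running_loss += loss.item()   # detached Python float, so each step's graph can be freed
```

If memory still grows, wrapping any evaluation loop in `torch.no_grad()` and checking for tensors kept across iterations (e.g. appended to lists) are the next usual suspects.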
https://github.com/liyiying/Feature_Critic/blob/417eca68168537b7edfe154e3c5a5f3bb1e8947c/utils.py#L24 Sorry, I don't understand the meaning of this function; I hope you could explain it. And in model_PACS.py, lines 387-392: ` temp_new_feature_extractor_network = alexnet(pretrained=False).cuda() fix_nn(temp_new_feature_extractor_network, theta_updated_new) temp_new_feature_extractor_network.train() temp_old_feature_extractor_network =...
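For context only (this is not the repo's exact `fix_nn`, which may keep the autograd graph of the updated parameters for the meta-gradient): a common meta-learning pattern behind helpers like this is to build a second network with the same architecture and overwrite its parameters with the post-update values theta', so a forward pass can be run with the updated weights while the original network keeps the old ones. A minimal sketch under that assumption, with an illustrative one-step update:

```python
import copy
import torch
from torchvision.models import alexnet

def load_updated_params(net, updated_params):
    # Overwrite net's parameters in place with the given updated tensors (values only).
    with torch.no_grad():
        for p, p_new in zip(net.parameters(), updated_params):
            p.copy_(p_new)

feature_extractor = alexnet(pretrained=False)

# One illustrative gradient step to obtain theta' = theta - lr * grad.
dummy_loss = sum(p.sum() for p in feature_extractor.parameters())
grads = torch.autograd.grad(dummy_loss, list(feature_extractor.parameters()))
theta_updated_new = [p - 1e-3 * g for p, g in zip(feature_extractor.parameters(), grads)]

# Same architecture, separate copy: the forward pass below uses theta',
# while `feature_extractor` still holds the pre-update weights.
temp_new_feature_extractor_network = copy.deepcopy(feature_extractor)
load_updated_params(temp_new_feature_extractor_network, theta_updated_new)
temp_new_feature_extractor_network.train()
```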
Sorry for the inconvenience. I trained and tested on the PACS dataset. The accuracy of the model I trained myself was 58%, while the accuracy of your trained model was...
Can this code be applied to other pretrained networks, like resnet18 or resnet50? Based on line 361 in model_PACS, it seems it cannot be applied to resnet18. I have tried resnet50, but...
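For reference, the AlexNet-specific part is mainly the feature dimension fed to the classifier and the feature-critic head, so switching backbones means changing that dimension as well (AlexNet's fc layers use 4096 features, resnet18 ends in 512, resnet50 in 2048). A minimal sketch of swapping in a torchvision ResNet; this is illustrative, not a drop-in patch for model_PACS.py:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet50

def build_backbone(name, num_classes=7):          # PACS has 7 classes
    if name == 'resnet18':
        net, feat_dim = resnet18(pretrained=False), 512
    elif name == 'resnet50':
        net, feat_dim = resnet50(pretrained=False), 2048
    else:
        raise ValueError(name)
    net.fc = nn.Identity()                        # expose pooled features instead of logits
    classifier = nn.Linear(feat_dim, num_classes)
    return net, classifier, feat_dim

backbone, classifier, feat_dim = build_backbone('resnet50')
features = backbone(torch.randn(2, 3, 224, 224))  # shape (2, feat_dim)
logits = classifier(features)
```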
Thanks for sharing the code. I have the imagenet2012 dataset, which is more than 100 GB. How can I process it into the 6.1 GB preprocessed dataset? Looking forward to your reply~
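A guess only: assuming the 6.1 GB version is simply a downscaled copy of ImageNet (the Visual Decathlon benchmark, for example, rescales images so the shorter side is 72 pixels), the reduction would come from batch-resizing the JPEGs; the exact target size used for the released preprocessed data is an assumption here. A sketch of such a resize pass:

```python
import os
from PIL import Image

def resize_tree(src_root, dst_root, short_side=72):
    # Walk the ImageNet folder tree and save a downscaled copy with the same layout.
    for dirpath, _, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        os.makedirs(os.path.join(dst_root, rel), exist_ok=True)
        for fname in filenames:
            if not fname.lower().endswith(('.jpg', '.jpeg', '.png')):
                continue
            img = Image.open(os.path.join(dirpath, fname)).convert('RGB')
            w, h = img.size
            scale = short_side / min(w, h)
            img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
            img.save(os.path.join(dst_root, rel, fname), quality=90)

# resize_tree('/data/imagenet2012/train', '/data/imagenet_small/train')
```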
Hi @liyiying. Thanks for your implementation! I have a question: this implementation feeds batches from each meta-train dataset into feature_extractor_network and sums up the losses (meta_train_loss_dg += loss_dg)....
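For reference, a minimal sketch of the pattern being asked about, with toy networks and random batches standing in for the repo's loaders: each meta-train domain contributes one batch, the per-domain losses are summed into a single scalar, and one backward/optimizer step is taken on that sum.

```python
import torch
import torch.nn as nn

feature_extractor_network = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
classifier = nn.Linear(512, 7)                    # PACS has 7 classes
params = list(feature_extractor_network.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=1e-3)

# One batch per meta-train domain (random data stands in for the real loaders).
meta_train_batches = [(torch.randn(8, 3, 32, 32), torch.randint(0, 7, (8,)))
                      for _ in range(3)]

meta_train_loss_dg = 0.0
for x, y in meta_train_batches:
    logits = classifier(feature_extractor_network(x))
    meta_train_loss_dg = meta_train_loss_dg + nn.functional.cross_entropy(logits, y)

optimizer.zero_grad()
meta_train_loss_dg.backward()   # one backward pass over the summed per-domain losses
optimizer.step()
```

Summing rather than averaging only rescales the gradient by the number of meta-train domains, which can be absorbed into the learning rate.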
Hi @liyiying, thanks for your implementation. I am a little confused about the method in the paper. Your aim in the paper is domain generalisation. However, in...