Ifrah Maqsood
@linjieli222 Thank you for your response. Yes, I'm referring to the Table 3 results. Did you use **python3 main.py --config config/ban_vqa.json** for training, and later on evaluated the three graph attentions...
Hi, thanks for your response. For the relations table, it was easy to create the graph by picking the words from the relations table, like `object relation and subject`. How to extract the...
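To make my question concrete, here is a minimal sketch of what I mean by building the graph from the relations table (the triples and the use of networkx are just my own illustration, not code from the repo):

```python
# Sketch only: assumes the relations table yields plain-text
# (subject, relation, object) triples; the example triples are made up.
import networkx as nx

triples = [
    ("umbrella", "is", "blue"),
    ("oven", "in", "kitchen"),
]

g = nx.DiGraph()
for subj, rel, obj in triples:
    # subject and object become nodes; the relation labels the edge
    g.add_edge(subj, obj, relation=rel)

print(g.edges(data=True))
```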
Here are some more sentences it couldn't detect:
- umbrella is blue and white
- microwave oven in the kitchen
- stainless steel oven
- red fire hydrant in snow
- bear with black eyes...
@apsdehal I am sorry I didn't understand. Full logs of which part?
@apsdehal Here are the logs:

```
(mmf) $ CUDA_VISIBLE_DEVICES=0,1 mmf_predict config=projects/movie_mcan/configs/vqa2/defaults.yaml \
    model=movie_mcan \
    dataset=vqa2 \
    run_type=test checkpoint.resume_zoo=movie_mcan.grid.vqa2_vg \
    training.num_workers=2 \
    training.batch_size=32
2021-04-17T21:16:00 | mmf.utils.configuration: Overriding option config to projects/movie_mcan/configs/vqa2/defaults.yaml
2021-04-17T21:16:00 | mmf.utils.configuration: Overriding option model...
```
@apsdehal @vedanuj @ytsheng Any updates? Or can you reproduce this issue on your end?
@apsdehal Thank you for your response. The issue is that I need to calculate the features myself. I provided the steps above that I have followed for X-152pp. If I...
I understand, but due to resource limitations I could not download the features that MMF auto-downloads. I'll try to make it possible to download them and will update you...
@apsdehal Reporting back. I downloaded the features and extracted the file. I checked the file sizes of both the downloaded features and the ones I extracted myself. Yes, you are right...
Also, I cross-checked the size of the features I extracted with https://github.com/facebookresearch/grid-feats-vqa against their pre-extracted (provided) features, and the sizes are the same. So I think I have been calculating the features correctly...
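For reference, this is roughly how I compared them (a quick sketch; the file names and the .npy assumption are mine, adjust if your features are stored as .pth or in an lmdb):

```python
# Sanity check: compare file size and array shape of one of my extracted
# feature files against the corresponding provided one.
import os
import numpy as np

mine = "features_mine/000000000139.npy"      # hypothetical path
theirs = "features_given/000000000139.npy"   # hypothetical path

print("file sizes:", os.path.getsize(mine), os.path.getsize(theirs))

a, b = np.load(mine), np.load(theirs)
print("shapes:", a.shape, b.shape)
```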