Results 12 comments of datar001

Thanks for your reply. Is my understanding of "the gradient accumulation and all tags in one iteration" correct? ![image](https://user-images.githubusercontent.com/62345916/136201403-e40941bf-15e0-4f6e-81ca-c2b6dc161aef.png) ![image](https://user-images.githubusercontent.com/62345916/136201490-e273fc29-5666-4ef8-b290-ce512ec76508.png) And is '20k for 6 tags' a typo? The official repo...
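For context, the equivalence behind gradient accumulation can be sketched as follows. This is a minimal numpy illustration, not code from the repo: the linear model, loss, and batch sizes are all hypothetical, and it only shows that averaging micro-batch gradients before a single update reproduces the full-batch gradient.

```python
import numpy as np

# Hypothetical linear model y_hat = w * x with squared loss, used to show
# that accumulating averaged micro-batch gradients before one optimizer
# step matches the full-batch gradient (the point of gradient accumulation).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = rng.standard_normal(8)
w = 0.5

def grad(w, xb, yb):
    # d/dw of mean((w*x - y)^2) over the given batch
    return np.mean(2 * (w * xb - yb) * xb)

# Gradient of the full batch of 8 samples in one pass.
full = grad(w, x, y)

# Same batch split into 4 micro-batches of size 2, gradients accumulated
# and averaged across micro-batches.
accum = 0.0
for i in range(0, 8, 2):
    accum += grad(w, x[i:i + 2], y[i:i + 2]) / 4

print(np.isclose(full, accum))  # True
```

With equally sized micro-batches, the average of micro-batch means equals the full-batch mean, which is why the accumulated update is equivalent to one large-batch step.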

> Hi, please make sure you have successfully installed the following two libs:
>
> block==0.0.5
> block.bootstrap.pytorch==0.1.6

I have installed these libs: ![image](https://user-images.githubusercontent.com/62345916/121547843-15a7c900-ca3f-11eb-89b1-7be1e03d8e52.png) I installed all the libs according to requirements.txt.

The environment was set up following the guidance in this repo, so it seems to be right: Python 3.6.8, torch 1.6.0. I tested this line in the Python console and it also outputs...

I have just solved this problem. It seems to be caused by the pip package index. When I re-installed block and block.bootstrap.pytorch from mirrors.aliyun.com/pypi/simple rather than pypi.tuna.tsinghua.edu.cn/simple, the code ran well.
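For anyone hitting the same issue, pip's `-i` flag selects the index for a single install; the versions below are the ones quoted earlier in this thread, and the mirror URL is the one that worked for me (whether it works may depend on your network):

```shell
# Reinstall the two libs from the Aliyun PyPI mirror instead of the default index.
pip install --force-reinstall block==0.0.5 block.bootstrap.pytorch==0.1.6 \
    -i https://mirrors.aliyun.com/pypi/simple/
```

Mirrors occasionally serve stale or broken wheels, so switching indexes is worth trying whenever an install succeeds but the package fails at import time.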

Is there a released version that supports Chinese prompts?

I have the same question: why is dim=0 used rather than dim=1? Does it give better performance?

Wow, maybe I just missed it... Also, if dim=-2 is used rather than -1 in line 45 of networks/VQAModel/HGA.py, performance improves slightly. This seems an implement...
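Why the choice of dim matters: assuming the op in question is a softmax over a 2-D attention-score matrix (a guess based on the discussion, not verified against HGA.py), the dim argument decides which axis is normalized to sum to 1. A minimal numpy sketch:

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.standard_normal((3, 4))  # hypothetical (queries x keys) matrix

a_last = softmax(scores, axis=-1)  # dim=-1: each row sums to 1 (normalize over keys)
a_prev = softmax(scores, axis=-2)  # dim=-2: each column sums to 1 (normalize over queries)

print(np.allclose(a_last.sum(axis=-1), 1.0))  # True
print(np.allclose(a_prev.sum(axis=-2), 1.0))  # True
```

So dim=-1 vs dim=-2 (or dim=0 vs dim=1 on a 2-D tensor) changes which entities the attention weights are distributed over, which plausibly explains the small performance difference reported above.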

I find that this problem does not prevent the code from running; the project can still be trained and evaluated. However, there are two additional minor errors in line 51...