peijunbao
My lower results seem due to some bugs in my generation of preprocess/grounding_info. I generated the data again and achieved results similar to those in the paper. Thank you.
The lower results seem to come from bugs when generating preprocess/grounding_info with the training command. More specifically, when I generate preprocess/grounding_info with the testing command, i.e. python -m src.experiment.eval \ --config pretrained_models/anet_LGI/config.yml...
Thank you, I will check it. Are the config file provided with the pretrained model and the one in experiment\options the same? i.e. pretrained_models\anet_LGI\config.yml and experiment\options\anet\tgn_lgi\LGI.yml. They seem to be written in different styles, but do they...
The Distinct Query Attention loss (DQA loss) works well to regularize the query attention, but I have a few questions about its implementation. Assume that a query sentence has words...
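For reference, here is a minimal sketch of the kind of penalty a DQA loss typically computes. This is an assumption, not the repository's actual implementation: it follows the well-known self-attentive embedding regularizer of Lin et al. (2017), ||A·Aᵀ − I||²_F, applied to the per-query attention rows over the words of the sentence. The function name `dqa_loss` and the toy matrix are hypothetical.

```python
import numpy as np

def dqa_loss(A):
    """Sketch of a Distinct Query Attention penalty (assumed form):
    || A @ A.T - I ||_F^2, where A has shape (n_queries, n_words)
    and each row is an attention distribution over the words.
    Off-diagonal terms of the Gram matrix penalize queries that
    attend to overlapping words; diagonal terms push each row
    toward a peaked (near one-hot) distribution."""
    G = A @ A.T                  # (n_queries, n_queries) Gram matrix
    I = np.eye(A.shape[0])
    return float(np.sum((G - I) ** 2))  # squared Frobenius norm

# Toy example (hypothetical numbers): 3 query-attention rows over 5 words.
A_distinct = np.eye(3, 5)            # each query attends to a different word
A_overlap = np.ones((3, 5)) / 5.0    # all queries attend uniformly
print(dqa_loss(A_distinct), dqa_loss(A_overlap))
```

With fully distinct one-hot rows the Gram matrix equals the identity and the penalty is zero, while overlapping rows are penalized, which is the behavior the regularizer is meant to enforce.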