Licheng Yu

14 comments by Licheng Yu

Hi, nn.LanguageModelCriterion is optimized by minimizing -logprobs / #total_number_wds within a batch, which I would consider a log perplexity (log_ppl). However, during beam search, we choose the top-K beams with the highest logprobs:...
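For context, here is a minimal sketch (not the repo's actual code) of the two quantities being contrasted, assuming per-token log-softmax outputs and a 0/1 word mask:

```python
import torch

def masked_lm_loss(logprobs, target, mask):
    # logprobs: (batch, seq_len, vocab_size) log-softmax outputs
    # target:   (batch, seq_len) word indices; mask: (batch, seq_len) 0/1
    picked = logprobs.gather(2, target.unsqueeze(2)).squeeze(2)
    # average -logprob per real word in the batch, i.e. a log perplexity
    return -(picked * mask).sum() / mask.sum()

def beam_score(step_logprobs):
    # beam search instead ranks candidates by the *summed* logprob,
    # which is not length-normalized like the criterion above
    return sum(step_logprobs)
```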

You are absolutely right here! But I think the ranking of done_beams needs to consider log_ppls. What I did was add one more function called "compare_ppl", and I will calculate...
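A hedged sketch of that idea; the beam fields `logp` and `seq` below are illustrative names, not the actual ones in the code:

```python
def beam_log_ppl(beam):
    # per-word logprob of a finished beam (higher is better);
    # dividing by length removes the bias toward shorter sequences
    return beam['logp'] / max(len(beam['seq']), 1)

# rank done_beams by per-word logprob (log_ppl) instead of total logprob
# done_beams.sort(key=beam_log_ppl, reverse=True)
```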

I believe the issue is "speechContexts" versus "speechContext"; please check your pip-installed code again. :-)

Hi jshi31, you are right. The PyTorch version I used for this project was 0.2, and I haven't tested it with the latest version since then, so the easiest way to evaluate...

Thanks @jshi31. Yeah, this may be due to an issue with the old Python version. I may have used the keys of a dictionary to build some index, and older Python may not maintain...
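For illustration only (hypothetical data): before Python 3.7, dict key order was an implementation detail, so an index built from raw `.keys()` can differ between interpreter versions; sorting the keys makes it deterministic:

```python
feats = {'ref_3': None, 'ref_1': None, 'ref_2': None}   # hypothetical entries

idx_unstable = {k: i for i, k in enumerate(feats.keys())}         # order not guaranteed pre-3.7
idx_stable = {k: i for i, k in enumerate(sorted(feats.keys()))}   # same on every Python version
print(idx_stable)  # {'ref_1': 0, 'ref_2': 1, 'ref_3': 2}
```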

Yes, I merged several datasets and retrained the model for the demo, so the output would be different from what's released here.

Hi, the features are too big to share. I extracted each image's C4 features first and saved them there. During training, I run ROI pooling for each object given its ground-truth or detected...
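A rough sketch of that ROI pooling step using the modern torchvision API (the original code predates torchvision.ops, so the shapes and names here are just illustrative):

```python
import torch
from torchvision.ops import roi_align

c4_feats = torch.randn(1, 1024, 38, 50)                 # cached C4 feature map (hypothetical size)
boxes = torch.tensor([[0., 10., 20., 110., 180.]])      # [batch_idx, x1, y1, x2, y2] in image coords

# spatial_scale maps image-space boxes onto the C4 grid (stride 16 for a ResNet C4 backbone)
obj_feat = roi_align(c4_feats, boxes, output_size=(7, 7), spatial_scale=1.0 / 16)
print(obj_feat.shape)  # torch.Size([1, 1024, 7, 7])
```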

Perhaps downgrading PyTorch to the version I used (0.2) would be the easier way to resolve all the issues :-) I will update my code to modern versions in the near future.

Probably not with the current contrastive ranking loss. You could try replacing it with a binary cross-entropy loss, whose sigmoid output gives a score in (0, 1). I would imagine an ambiguous expression would return...
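A minimal sketch of what that swap could look like, assuming the model produces one matching logit per (expression, region) pair; the labels and shapes are made up for illustration:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

logits = torch.randn(8)                                   # hypothetical matching scores for 8 pairs
labels = torch.tensor([1., 0., 0., 1., 0., 1., 0., 0.])   # 1 = expression matches the region

loss = bce(logits, labels)          # train with this in place of the ranking loss
probs = torch.sigmoid(logits)       # at test time: per-pair confidence in (0, 1)
```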

Maybe you could check the keys in ref(xxx).p, or take a look at the dataloader (https://github.com/lichengunc/refer/blob/master/refer.py).
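If it helps, here is a quick way to peek at that pickle (the file name below is just the placeholder from above; substitute the refs file you actually have):

```python
import pickle

with open('ref(xxx).p', 'rb') as f:     # placeholder name; use your local file
    refs = pickle.load(f)

print(type(refs), len(refs))
print(refs[0].keys() if isinstance(refs, list) else refs.keys())  # inspect available fields
```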