dmgkv79
I already used m2m_100 successfully last year. Now I tried to generate ro-en translations with the current fairseq version and the m2m100 models for 6 and 8 GPUs. However,...
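For reference, a minimal sketch of the kind of generation call I mean, following the layout of the examples/m2m_100 README (the data path and checkpoint name are placeholders, and the 12B checkpoints split across 6/8 GPUs additionally need the pipeline-model-parallel flags listed in that README):

```
# Sketch of a ro->en generation run, modeled on the examples/m2m_100 README.
# data_bin is the binarized ro-en data; 418M_last_checkpoint.pt stands in for
# whichever M2M-100 checkpoint is being used.
fairseq-generate data_bin \
    --batch-size 32 \
    --path 418M_last_checkpoint.pt \
    --fixed-dictionary model_dict.128k.txt \
    -s ro -t en \
    --remove-bpe 'sentencepiece' \
    --beam 5 \
    --task translation_multi_simple_epoch \
    --lang-pairs language_pairs_small_models.txt \
    --decoder-langtok --encoder-langtok src \
    --gen-subset test > gen_out
```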
@edchengg, thanks a lot for your reply! I was trying something with a class of my own built in a fork of fairseq, so for me it would have been better...
Yes, I have the same question. If I want to train a parrot model for a different language, how is adequacy computed?
At first glance it seems to be a model trained on an entailment task. Still, I'm not sure.