edchengg

20 comments by edchengg

> > > trying to generate with 4 rtx 3090:
> > > ```
> > > fairseq-generate \
> > >     bin \
> > >     --batch-size 1 \
> > >     ...
> > > ```

> I have already used m2m_100 successfully last year. Now I tried to generate ro-en translation with the current fairseq version and the m2m100 models for 6 and 8 GPUs....

This is a bug, @Narsil. Using `T5Tokenizer` and installing the `sentencepiece` library will fix this issue. DO NOT use `AutoTokenizer`.
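
For reference, a minimal sketch of the workaround described above; the checkpoint name `t5-base` is an assumption, not something named in the thread:

```python
# Minimal sketch of the workaround: load the slow, sentencepiece-based
# T5Tokenizer directly instead of relying on AutoTokenizer (which may
# resolve to a different tokenizer class). Requires: pip install sentencepiece
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")  # checkpoint is an assumption
print(tokenizer.tokenize("Hello world!"))
```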

@Narsil To be clear, I understand there is no way to recover the original string. But the goal in this thread is to recover the whitespace before special tokens. And...
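
A small round-trip illustration of the behavior under discussion; the checkpoint and the exact decoded output are assumptions, but it shows where whitespace adjacent to a special token can get lost:

```python
# Hypothetical round-trip: whitespace before a special token may not
# survive encode/decode. Checkpoint name is an assumption.
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
text = "hello <extra_id_0> world"
ids = tok.encode(text, add_special_tokens=False)
print(repr(tok.decode(ids)))  # the space before <extra_id_0> may be dropped
```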

I also got an F1 around 69 for trigger classification with BERT + a linear classification layer. But this is way below the results reported in the papers (https://www.aclweb.org/anthology/P19-1522.pdf, https://www.aclweb.org/anthology/K19-1061.pdf). They got F1 ranges...
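
For context, a minimal sketch of the kind of model described here (BERT with a single linear layer over token representations); the label count and checkpoint are assumptions, not the thread's actual code:

```python
# Sketch of a BERT + linear-layer trigger classifier, assuming per-token
# trigger-type labels. num_trigger_types and the checkpoint are assumptions.
import torch.nn as nn
from transformers import BertModel

class TriggerClassifier(nn.Module):
    def __init__(self, num_trigger_types: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_trigger_types)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, num_trigger_types) logits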

So `(i, t_start, t_end, t_type_str, a_start, a_end, a_type_idx)` should be `(i, t_type_str, a_start, a_end, a_type_idx)` instead.
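
To make the proposed change concrete, a toy illustration with dummy values (the field names come from the comment above; the values are made up):

```python
# Dummy values illustrating the keying change: dropping the trigger span
# (t_start, t_end) so matching depends on the trigger type, not its offsets.
i, t_start, t_end = 0, 3, 4
t_type_str, a_start, a_end, a_type_idx = "Attack", 7, 9, 2

old_key = (i, t_start, t_end, t_type_str, a_start, a_end, a_type_idx)
new_key = (i, t_type_str, a_start, a_end, a_type_idx)
print(old_key)
print(new_key)
```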

> > I also got an F1 around 66 for trigger classification with BERT + a linear classification layer. But this is way below the results reported in the papers (https://www.aclweb.org/anthology/P19-1522.pdf, https://www.aclweb.org/anthology/K19-1061.pdf). They got...

> Hi,
> I read your code and found two problems that hinder performance. First, as far as I know, previous papers use head words of entity...