XLM-R Hyperparameters
Hi @JunjieHu!
I am trying to reproduce results for XLM-R. The paper suggests lr=3e-5 and an effective batch size of 16 for XLM-R. It would be very helpful if you could share some more details on the hyperparameters (the code in the repo is configured only for mBERT).
Specifically:
- Are hparams the same for all the datasets?
- Are they also the same for XLM and XLM-R-large?
- Did you use warmup updates and/or learning rate scheduling?
- Any other non-default choices that could impact training (e.g. dropout or gradient clipping)?
Thanks in advance for your answers and thanks for this resource.
Hi @maksym-del
Here are my answers to your questions.
- Mostly we used the hyperparameters provided in scripts/*.sh. For all tasks except QA, I used lr=2e-5, effective batch_size=32, and max epoch=5, 10, or 20 depending on the size of the English training data. I did search over [2e-6, 2e-5, 3e-5, 5e-5], and 2e-5 generally worked well.
- The same hyperparameters in the bash scripts are used for XLM/XLM-R-large/mBERT on most tasks. For the QA tasks, Sebastian may have tuned some parameters slightly.
- For fine-tuning, we didn't use warmup or learning rate scheduling in this work. In other work, I tried polynomial warmup with the AdamW optimizer on the XNLI task, and it worked better than a constant learning rate without warmup.
- In my experience, the learning rate and optimizer can greatly impact performance. In some cases a smaller learning rate works better. As for the optimizer, LAMB and SGD are slightly better than Adam, although we used Adam in this work to avoid adding complexity to the code.
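For anyone reproducing this, here is a minimal sketch of the polynomial-warmup schedule mentioned in answer 3: linear warmup to the peak LR, then polynomial decay back to zero. The step counts, peak LR, and decay power are illustrative assumptions, not values from the XTREME scripts; with PyTorch this multiplier would typically be wrapped in `torch.optim.lr_scheduler.LambdaLR` around an AdamW optimizer.

```python
def lr_multiplier(step, warmup_steps=100, total_steps=1000, power=1.0):
    """Factor applied to the peak learning rate at `step`:
    linear warmup, then polynomial decay to zero."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)            # linear warmup
    remaining = (total_steps - step) / max(1, total_steps - warmup_steps)
    return max(0.0, remaining) ** power               # polynomial decay

peak_lr = 2e-5
print(peak_lr * lr_multiplier(0))     # 0.0 (start of warmup)
print(peak_lr * lr_multiplier(100))   # 2e-05 (peak, warmup finished)
print(peak_lr * lr_multiplier(1000))  # 0.0 (fully decayed)
```

With `power=1.0` this reduces to the standard linear warmup + linear decay schedule; higher powers decay faster near the end of training.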
Hi @JunjieHu
thanks for the clarification!
Regarding answer 3, I can see from the config file that you are indeed not using warmup steps, but from the code it looks like a linear LR decay is still applied (just without warmup).
Please correct me if I'm wrong.
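For concreteness, the behavior I think I'm seeing is roughly the following (a sketch with made-up step counts; the actual values come from the repo's training loop):

```python
def linear_decay_multiplier(step, total_steps=1000):
    """LR factor with no warmup: 1.0 at step 0,
    decaying linearly to 0.0 at total_steps."""
    return max(0.0, (total_steps - step) / total_steps)

peak_lr = 2e-5
print(peak_lr * linear_decay_multiplier(0))    # 2e-05 (no warmup: starts at peak)
print(peak_lr * linear_decay_multiplier(500))  # 1e-05 (halfway through decay)
```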