Questions about the experiment details
Hi, thanks for sharing the source code.
- In Table 2, are these reported numbers the results of the test split or the validation split?
- In Table 2, for RoBbase (LoRA) on the RTE task, the reported result is 86.6. Is this a typo? It is much higher than the full fine-tuning result (delta = 7.9).
Thanks for your questions.
- I believe these are validation numbers, since the test set is not public. That is also what prior work did.
- Nope, that's not a typo. You can verify it with our checkpoint :)
Wow, thanks for your quick response. I have two more questions.
- If I understand correctly, the BitFit numbers in Table 2 were taken from the original paper. However, there are some numbers I cannot find in that paper. For example, you report RoBbase (BitFit) on the MRPC task as 92.7, but I believe the original paper reported 92.0 in their Table 2. Could you give more details about this?
- Do you fine-tune the bias terms? I understand you don't require gradients for the weight terms, but I did not see you turn this off for the bias terms.
https://github.com/microsoft/LoRA/blob/33b953630763c6299d2349abc8f154a3951a7984/loralib/layers.py#L116
You are right. I can't remember where we got 92.7, and it should be 92.0.
Yes, the bias terms are learnable here, even though they were not in the code used for our experiments. This seems to be a good idea in practice and adds minimal overhead. The checkpointing utility functions should take care of saving/loading the biases. Please let me know if you run into any issues :)
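For reference, here is a minimal sketch of how this can look with the `loralib` utilities, keeping the pretrained weights frozen while leaving the biases and LoRA parameters trainable, and saving/loading only those parameters. The model, layer sizes, rank, and the `bias='all'` option are illustrative assumptions, not the exact configuration from the experiments:

```python
import torch
import torch.nn as nn
import loralib as lora

# Illustrative model: a LoRA-adapted linear layer plus a plain linear head.
# Sizes and rank are arbitrary assumptions, not the paper's configuration.
model = nn.Sequential(
    lora.Linear(768, 768, r=8),
    nn.Linear(768, 2),
)

# Freeze the pretrained weights; with bias='all', every bias term stays
# trainable in addition to the LoRA matrices (lora_A / lora_B).
lora.mark_only_lora_as_trainable(model, bias='all')

# Only the LoRA parameters and biases now require gradients.
print([n for n, p in model.named_parameters() if p.requires_grad])

# Checkpointing: save only the LoRA parameters and biases, not the frozen weights.
torch.save(lora.lora_state_dict(model, bias='all'), 'lora_ckpt.pt')

# Loading: restore the pretrained weights first, then apply the LoRA/bias
# checkpoint with strict=False so the missing frozen weights are not an error.
model.load_state_dict(torch.load('lora_ckpt.pt'), strict=False)
```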
Thanks~