
Questions about the experiment details

Open speedcell4 opened this issue 3 years ago • 2 comments

Hi, thanks for sharing the source code.

  1. In Table 2, are these reported numbers the results of the test split or the validation split?
  2. In Table 2, for RoBbase (LoRA) on the RTE task, the reported result is 86.6. Is this a typo? It is much higher than the full fine-tuning result (delta = 7.9).

speedcell4 avatar Sep 14 '22 02:09 speedcell4

Thanks for your questions.

  1. I believe these are validation numbers, since the test set is not public. Prior work does the same.
  2. Nope, that's not a typo. You can verify it with our checkpoint :)

edwardjhu avatar Sep 14 '22 11:09 edwardjhu

Wow, thanks for your quick response. I have two more questions.

  1. If I understand correctly, the BitFit numbers in Table 2 were taken from the original paper. However, there are some numbers I cannot find there. For example, you report 92.7 for RoBbase (BitFit) on the MRPC task, but I believe the original paper reports 92.0 in its Table 2. Could you give more details on this?
  2. Do you fine-tune the bias terms? I understand that you disable gradients for the weight terms, but I did not see you turn them off for the bias terms.

https://github.com/microsoft/LoRA/blob/33b953630763c6299d2349abc8f154a3951a7984/loralib/layers.py#L116
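For illustration, here is a minimal, framework-free sketch of the freezing logic being asked about, loosely modeled on loralib's `mark_only_lora_as_trainable` helper. Parameters are represented as a plain dict of name to requires_grad flag; the real code operates on `torch.nn.Parameter` objects, and the exact names here are hypothetical:

```python
# Sketch: freeze everything except the LoRA matrices, optionally
# keeping bias terms trainable. Parameters are modeled as a dict
# of name -> requires_grad flag (the real code uses torch Parameters).

def mark_only_lora_as_trainable(params, bias="none"):
    """bias="none" -> biases frozen along with the pretrained weights
    bias="all"  -> every bias term stays trainable
    """
    for name in params:
        if "lora_" in name:
            params[name] = True   # LoRA A/B matrices stay trainable
        elif bias == "all" and name.endswith("bias"):
            params[name] = True   # bias terms stay trainable too
        else:
            params[name] = False  # pretrained weights are frozen

params = {
    "attn.weight": True,
    "attn.bias": True,
    "attn.lora_A": True,
    "attn.lora_B": True,
}
mark_only_lora_as_trainable(params, bias="none")
print(params)  # with bias="none", only the lora_ entries remain True
```

With `bias="all"`, the bias entries would stay `True` as well, which corresponds to the learnable-bias behavior discussed in this thread.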

speedcell4 avatar Sep 15 '22 01:09 speedcell4

You are right. I can't remember where we got 92.7, and it should be 92.0.

Yes, the bias term is learnable here, even though it was not in the code used for our experiments. This seems to be a good idea in practice and adds minimal overhead. The checkpointing utility functions should take care of saving/loading biases. Please let me know if you encounter any issues :)
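The checkpointing behavior mentioned above can be sketched as follows. This is a simplified, framework-free illustration in the spirit of loralib's `lora_state_dict` utility, not the actual library code; the parameter names are made up for the example:

```python
# Sketch: keep only the parameters that need to be checkpointed,
# i.e. the LoRA matrices and, optionally, the bias terms.
# The pretrained weights are unchanged and need not be saved.

def lora_state_dict(state_dict, bias="all"):
    out = {}
    for name, value in state_dict.items():
        if "lora_" in name:
            out[name] = value                      # always save LoRA matrices
        elif bias == "all" and name.endswith("bias"):
            out[name] = value                      # save learnable biases too
    return out

full = {
    "attn.weight": [[1.0]],
    "attn.bias":   [0.1],
    "attn.lora_A": [[0.0]],
    "attn.lora_B": [[0.0]],
}
small = lora_state_dict(full, bias="all")
print(sorted(small))  # only the LoRA matrices and the bias survive
```

Loading would then apply this small dict on top of the pretrained weights (in PyTorch, e.g. via `load_state_dict(..., strict=False)`).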

edwardjhu avatar Sep 28 '22 19:09 edwardjhu

Thanks~

speedcell4 avatar Sep 28 '22 23:09 speedcell4