Trainer compute_loss signature mismatch with newer transformers version
With the current transformers version (4.46.1), the `compute_loss` signature has changed, which causes failures when importing and using
`from tevatron.retriever.trainer import TevatronTrainer as Trainer`. (The change in transformers is likely due to the recent fix for gradient accumulation.)
Changing the signature to
def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
in the trainer fixes the issue. This appears to be backward compatible with older transformers versions.
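For reference, here is a minimal sketch of why the extra parameter keeps the override compatible in both directions. The stub base class below only stands in for `transformers.Trainer` (the real call site lives in transformers, not here), and the model/loss details are illustrative, not Tevatron's actual implementation:

```python
# Stub standing in for transformers.Trainer's call site (illustrative only).
class StubTrainer:
    def training_step(self, model, inputs, num_items_in_batch=None):
        # transformers >= 4.46 passes num_items_in_batch (introduced by the
        # gradient-accumulation fix); older versions call without it.
        if num_items_in_batch is not None:
            return self.compute_loss(
                model, inputs, num_items_in_batch=num_items_in_batch
            )
        return self.compute_loss(model, inputs)


class PatchedTrainer(StubTrainer):
    # Giving num_items_in_batch a default means this single signature
    # works whether or not the base class passes the new argument.
    def compute_loss(self, model, inputs, return_outputs=False,
                     num_items_in_batch=None):
        outputs = model(**inputs)          # illustrative model call
        loss = outputs["loss"]
        return (loss, outputs) if return_outputs else loss
```

Because the new parameter is keyword-only in practice and defaulted, the same override runs under both old and new transformers call conventions.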
Thanks! I have submitted pull request #161 here, and I hope it can be merged soon.
Done
Sorry for the late response. Thank you @liyongkang123 for fixing the issue.