NaN when the input length is large
Hi
Thanks for your efforts, folks! While I was testing the code on my own dataset, I found that when the input length is large (~4000 tokens), the loss becomes NaN from the first step: `Epoch 0, Loss nan, LR 1.00e-05: 12%|█████`
For the same dataset, when I truncate my input to something shorter, I start to see a proper loss. What could be the problem?
I think there is an issue in the code, if I am not mistaken. The padding should be on the left side:
`[:, -args["context_length"]:]` in the `collate_fn` function.
After I made this change, the loss started to appear. Could you please confirm?
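To illustrate what I mean, here is a minimal sketch of a `collate_fn` that pads on the left and then keeps only the last `context_length` positions. The field names, `pad_token_id`, and the `-100` label mask are illustrative assumptions, not necessarily what the repo uses:

```python
import torch

def collate_fn(batch, pad_token_id=0, context_length=4096):
    # Assumes each item is a dict with "input_ids" and "labels" lists (placeholder schema).
    max_len = max(len(item["input_ids"]) for item in batch)
    input_ids, labels, attention_mask = [], [], []
    for item in batch:
        pad = max_len - len(item["input_ids"])
        # Left-side padding: pad tokens go *before* the real tokens.
        input_ids.append([pad_token_id] * pad + item["input_ids"])
        labels.append([-100] * pad + item["labels"])
        attention_mask.append([0] * pad + [1] * len(item["input_ids"]))
    input_ids = torch.tensor(input_ids)
    labels = torch.tensor(labels)
    attention_mask = torch.tensor(attention_mask)
    # Keep only the last `context_length` positions along the sequence dim,
    # so padding (and any overflow) is dropped from the start of each row.
    return {
        "input_ids": input_ids[:, -context_length:],
        "labels": labels[:, -context_length:],
        "attention_mask": attention_mask[:, -context_length:],
    }
```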
Wouldn't doing this truncate it from the left side?
Sorry, I didn't get you. You mean my update will not truncate it from the left?
I mean, if you have a tensor like [1, 2, 3, 4], doing so would truncate it from the left side to make [2, 3, 4]. This is equivalent to having a string such as ABCD being truncated to BCD, if I understand correctly.
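For what it's worth, here is a quick PyTorch check of what that slice does to a small 2-D tensor (the values are made up just for the demonstration):

```python
import torch

x = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8]])
context_length = 3
print(x[:, -context_length:])
# tensor([[2, 3, 4],
#         [6, 7, 8]])
# The batch dim is untouched; along the sequence dim only the *last*
# `context_length` tokens of each row are kept, i.e. the cut happens at the left end.
```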
`[:, -args["context_length"]:]`
Not sure, but I don't think so. This only truncates the second dim (sequence length) to a specific length.
This has been a persistent issue for me while trying to fine-tune a Llama model on analyst reports using bnb_dora. The above suggestion regarding changing the padding has not helped. I have tried reducing the --context_length arg to as low as 256 and the input length of my training data to as low as 1024 tokens, but I still see "Loss nan". Truncating the input length any further is pointless, as only a very small number of reports are that short.
If anyone has found a workaround for this I would greatly appreciate the knowledge.