Ask-Anything
ValueError: Attempting to unscale FP16 gradients when fully finetunning on LLM without LoRA?
Dear all: when I disable LoRA and set all of the Mistral LLM weights to requires_grad=True for full fine-tuning, I get the error:
ValueError: Attempting to unscale FP16 gradients
Could anyone give some tips?
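Not a confirmed fix from this thread, but a common cause of this error: the model is loaded in half precision (e.g. `torch_dtype=torch.float16`) while mixed-precision training (`fp16=True`) uses `GradScaler`, which refuses to unscale gradients of FP16 parameters; it expects FP32 master weights. With LoRA this goes unnoticed because the adapter weights are created in FP32, but for full fine-tuning every trainable parameter must be FP32. A minimal sketch of one workaround, casting trainable parameters to FP32 before training (the toy `Linear` below stands in for the Mistral model):

```python
import torch

def prepare_for_full_finetune(model: torch.nn.Module) -> torch.nn.Module:
    """Cast trainable FP16 parameters to FP32 so AMP's GradScaler
    can unscale their gradients during mixed-precision training."""
    for param in model.parameters():
        if param.requires_grad and param.dtype == torch.float16:
            param.data = param.data.float()
    return model

# Toy stand-in: a model loaded in half precision with all weights trainable
toy = torch.nn.Linear(4, 4).half()
for p in toy.parameters():
    p.requires_grad_(True)

prepare_for_full_finetune(toy)
print(all(p.dtype == torch.float32 for p in toy.parameters()))  # True
```

Alternatively, load the model in FP32 (the default `torch_dtype`) and let the trainer handle the FP16 autocast, or train in bf16 on hardware that supports it, since bf16 training does not use gradient unscaling.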
I am facing the same problem.
Have you solved it?