xudou3
> Uh, silly question, but @danielhanchen, are you sure that Unsloth isn't already supporting full finetuning by default?
>
> If you delete the `model = FastLanguageModel.get_peft_model(...)` step from one...
Hi, is there any update on this?
> Do you know what version of TRL you are using?

trl 0.14.0. After updating trl to 0.15.2 it can work, but it seems that the model output is incorrect. Looks...
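Since the fix hinges on the installed trl version, here is a small stdlib sketch for comparing version strings against the 0.15.2 minimum mentioned above; `parse_version` and `meets_minimum` are hypothetical helper names, not part of trl.

```python
# Hypothetical helpers (not part of trl) to check whether an installed
# version string meets the 0.15.2 minimum reported to work in this thread.
def parse_version(v: str) -> tuple:
    """Keep the leading numeric dotted part, e.g. "0.15.2.dev0" -> (0, 15, 2)."""
    parts = []
    for piece in v.split("."):
        if piece.isdigit():
            parts.append(int(piece))
        else:
            break
    return tuple(parts)

def meets_minimum(installed: str, minimum: str = "0.15.2") -> bool:
    # Tuple comparison is element-wise, so (0, 14, 0) < (0, 15, 2).
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("0.14.0"))  # False
print(meets_minimum("0.15.2"))  # True
```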
> The issue happens to be with the Python version you are using. If you use Python 3.11 it will work. But it is possible that you will have an...
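To rule out the interpreter mismatch described above, a one-line check against the Python 3.11 suggestion (a sketch; the helper name is illustrative):

```python
import sys

# Report whether the running interpreter matches the Python 3.11
# suggestion above; sys.version_info compares as a tuple.
def is_python_311_or_newer() -> bool:
    return sys.version_info[:2] >= (3, 11)

print(sys.version.split()[0], is_python_311_or_newer())
```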
> I encountered the same issue as you did. I checked all the installation versions on the official Colab and ensured that they were consistent, but the problem still persisted....
> after update trl==0.15.2 and set use_vllm=True, everything looks good so far. Thank you!

But I found a new problem: the batch size seems not to work, I can only run...
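On the batch-size question above: in the Hugging Face / TRL Trainer the effective batch size is the product of the per-device batch, gradient-accumulation steps, and GPU count, and GRPO additionally imposes a divisibility constraint involving `num_generations` (the exact check varies by trl version). A sketch of the arithmetic, with illustrative helper names:

```python
# Illustrative arithmetic only; the helper names are not part of trl.
def effective_batch_size(per_device: int, grad_accum: int, num_gpus: int = 1) -> int:
    # Standard HF Trainer fact: samples seen per optimizer step.
    return per_device * grad_accum * num_gpus

def valid_for_grpo(per_device: int, num_gpus: int, num_generations: int) -> bool:
    # GRPO samples num_generations completions per prompt, so trl requires
    # the global batch to divide evenly by it (exact check varies by version).
    return (per_device * num_gpus) % num_generations == 0

print(effective_batch_size(1, 8))   # 8
print(valid_for_grpo(8, 1, 8))      # True
print(valid_for_grpo(6, 1, 8))      # False
```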
> [@xudou3](https://github.com/xudou3) [@kings-crown](https://github.com/kings-crown) [@StarLight1212](https://github.com/StarLight1212) Apologies, just fixed the gibberish output! For Colab / Kaggle, please restart and run all. For local machines, please do:
>
> ```
> pip install...
> ```
> The issue is that `MistralForCausalLM_fast_forward` always returns logits instead of hidden states. The fix is on the way: [#1831](https://github.com/unslothai/unsloth/pull/1831)

I still have this problem while training Llama-3.2-1B-Instruct at commit...