jocastrocUnal
Same problem here
I hope in the future this code could work... it's more natural.

```python
model = peft_model.merge_and_unload()
model.save_pretrained("/model/trained")
```
I like that too :)
same here 
> I also have the same problem. The loss first decreases, then it slowly grows till it drops to zero. The image below shows the training loss in the first...
same here 
In the training arguments I set `resume_from_checkpoint = True`. But this is for the Hugging Face `Trainer`. https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments
The same issue here, with the model "llama-3-8b-Instruct-bnb-4bit". > I also just ran into this exact same issue. The model I am using is > > https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B >...