
Results 58 comments of Dr. Jusseaux

Hey!! Thanks for your answer! Well yes, I believe the finetuning ran on that same GPU; I saw in the console that the finetuning was run on...

Oh, so there's really no way to resume the finetuning on 12 GB of VRAM (i.e. save the optimizer states, if I understand correctly)? It's okay, let me just try that for...
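For context, a minimal sketch of what "saving the optimizer states" means in PyTorch. This is not the repo's actual checkpoint format, just an illustration: to resume finetuning exactly where it stopped, the optimizer's state dict (momentum/variance buffers) has to be saved alongside the model weights, and that extra state is what tends to push a 12 GB card over the limit.

```python
import io

import torch

# Hypothetical checkpoint layout, for illustration only.
model = torch.nn.Linear(4, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One dummy step so the optimizer actually has state (exp_avg buffers etc.).
model(torch.randn(3, 4)).sum().backward()
opt.step()

buf = io.BytesIO()  # stands in for a checkpoint file on disk
torch.save({"model": model.state_dict(), "optimizer": opt.state_dict()}, buf)

# Restoring both dicts is what makes the run resumable; dropping the
# optimizer entry shrinks the checkpoint but restarts optimization cold.
buf.seek(0)
ckpt = torch.load(buf)
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["optimizer"])
```

Dropping the optimizer dict is a trade-off, not an error: the weights are intact, but a resumed run behaves like a fresh finetune starting from those weights.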

I suppose I have to first delete the line in finetune.py, THEN retrain my model, and then try to call it through inference.py? Is that right? (Sorry, I'm still a...

Ok, this would then make the inference tap into the CPU to compensate for the lack of VRAM, is that it? And if I retrain my model without the optimizer...
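A hedged sketch of what the suggested `map_location="cpu"` change does (the buffer below just stands in for the checkpoint file): `torch.load` remaps every saved tensor onto the CPU at load time, so deserializing the checkpoint costs system RAM instead of VRAM.

```python
import io

import torch

# In-memory stand-in for a checkpoint saved on (possibly) a GPU machine.
buf = io.BytesIO()
torch.save({"weight": torch.randn(2, 2)}, buf)

# map_location="cpu" forces all tensors onto the CPU regardless of the
# device they were saved from, so no VRAM is allocated during the load.
buf.seek(0)
state = torch.load(buf, map_location="cpu")
print(state["weight"].device)  # prints: cpu
```

Individual tensors or modules can then be moved to the GPU selectively afterwards, which is the usual way to work around load-time OOM.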

OK! Let me try both solutions and get back to you later today :)

Hey again! So unfortunately I still have the OOM problem :( Here's what I tried: - I added the map_location="cpu" line to fast_inference_utils.py, wrote the path to my _former_...

No worries, thank you for your involvement :)))

Hey! Thank you! Of course, I've sent it to you by email through an external host because it's 5 GB ^^ Tell me if you've received it! The GPU I have is an RTX...

Hey @vatsalaggarwal, any news on this question? Have you received my checkpoint? Thank you! I'm in such a hurry to start finetuning and try my hand at making a French...

Would be interested in a French LoRA here :)