
Support multi-LoRA fine-tuning on the same GPU


Dear all,

We are implementing a multi-LoRA framework that supports fine-tuning multiple LLMs sharing the same base model on a single GPU.
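
For readers unfamiliar with the idea, here is a minimal sketch (not the repo's actual API; the class and parameter names below are hypothetical) of why this saves memory: several LoRA adapters share one frozen base weight, so each additional fine-tuning job only costs the small low-rank A/B matrices.

```python
import torch
import torch.nn as nn

class MultiLoRALinear(nn.Module):
    """One frozen base linear layer shared by several LoRA adapters."""

    def __init__(self, in_features, out_features, num_adapters, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # base model stays frozen
        self.scaling = alpha / r
        # Per-adapter low-rank factors: only these are trained,
        # so memory per extra job is O(r * (in + out)), not a full model copy.
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(r, in_features) * 0.01)
             for _ in range(num_adapters)]
        )
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, r))
             for _ in range(num_adapters)]
        )

    def forward(self, x, adapter_id):
        # y = x W^T + scaling * x A^T B^T for the selected adapter
        delta = (x @ self.lora_A[adapter_id].T) @ self.lora_B[adapter_id].T
        return self.base(x) + self.scaling * delta

layer = MultiLoRALinear(768, 768, num_adapters=3)
x = torch.randn(4, 768)
y0 = layer(x, adapter_id=0)  # job 0's adapter
y1 = layer(x, adapter_id=1)  # job 1's adapter; the base weight is reused
```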

We would be glad to work with the community to make LoRA fine-tuning use less GPU memory. You can check our contributions in this repo: https://github.com/TUDB-Labs/multi-lora-fine-tune

PRs are welcome.

Thanks!

merlintang · Oct 09 '23