LoRA
Support multi-LoRA fine-tuning on the same GPU
Dear All
We are implementing a multi-LoRA framework that supports fine-tuning multiple LLMs sharing the same base model on a single GPU.
We would be glad to work with the community to make LoRA fine-tuning use less GPU memory. You can check our work in this repository: https://github.com/TUDB-Labs/multi-lora-fine-tune
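To give a rough idea of what "sharing one base model across several LoRA jobs" means, here is a minimal sketch using the Hugging Face PEFT and transformers APIs (not the repo's own interface, which may differ): one copy of the frozen base weights lives on the GPU, and several small LoRA adapters are attached to it and trained by switching between them. The model name, adapter names, and hyperparameters below are placeholders, and this sequential adapter switching only illustrates base-weight sharing, not batched multi-adapter training.

```python
# Sketch: several LoRA adapters attached to one shared base model, so only
# one copy of the base weights sits in GPU memory. Uses Hugging Face
# PEFT/transformers for illustration; the multi-lora-fine-tune repo's own
# API may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
base_model = AutoModelForCausalLM.from_pretrained(
    base_name, torch_dtype=torch.float16, device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(base_name)

lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])

# The first adapter is created together with the PEFT wrapper ...
model = get_peft_model(base_model, lora_cfg, adapter_name="task_a")
# ... additional adapters share the same frozen base weights.
model.add_adapter("task_b", lora_cfg)

# Switch adapters before each task's training step; only the small LoRA
# matrices differ between tasks, the base model is loaded once.
for adapter in ["task_a", "task_b"]:
    model.set_adapter(adapter)
    batch = tokenizer("example input", return_tensors="pt").to(model.device)
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
```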
PRs are welcome.
Thanks