Results: 2 issues of lovekdl
I tried fine-tuning the llama-2-7b model using LoRA on an RTX 3090 with 24GB, where the memory usage was only about 17GB. However, when I used the same configuration on an...
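For context on the kind of setup this issue describes, here is a minimal sketch of a LoRA fine-tune of llama-2-7b using the Hugging Face transformers and peft stack. The issue does not show its actual configuration, so the model identifier, LoRA hyperparameters, and dtype choice below are illustrative assumptions, not the reporter's settings.

```python
# Minimal LoRA fine-tuning sketch (illustrative; hyperparameters are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-2-7b-hf"  # assumed model identifier

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision so the base weights fit a 24GB RTX 3090
    device_map="auto",
)

# Illustrative LoRA hyperparameters; the issue's actual values are not given.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

With adapters this small, activations and optimizer state for the trainable parameters dominate the headroom above the ~14GB of fp16 base weights, which is roughly consistent with the ~17GB usage mentioned in the issue; actual memory still varies with batch size, sequence length, and gradient checkpointing.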
This can increase AQuA accuracy by 10%+