litgpt
finetune (lora) with LitData
Is it possible to finetune (LoRA) a model with raw LitData, like the data used in pretraining? The main reason is that I want to perform "lightweight" continued pretraining on longer sequences, but via finetuning. Unsloth supports this.
That way I wouldn't have to convert the model so I can finetune (continued pretrain) with Unsloth every time.
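For context on why LoRA makes continued pretraining "lightweight": the base weights stay frozen and only a low-rank update is trained. A minimal NumPy sketch of that idea (illustrative only; the dimensions, `alpha`, and initialization values here are hypothetical, not litgpt's or Unsloth's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 2
alpha = 4.0  # LoRA scaling hyperparameter (hypothetical value)

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero-init

def forward(x):
    # Effective weight: W + (alpha / rank) * B @ A; only A and B are trained.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / rank)

x = rng.normal(size=(4, d_in))
# With B zero-initialized, the LoRA branch contributes nothing at step 0,
# so the adapted model starts out identical to the base model.
assert np.allclose(forward(x), x @ W.T)
```

Only `A` and `B` (rank x d_in + d_out x rank parameters) would receive gradients, which is what keeps the memory footprint small enough to push sequence length instead.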