
finetune (lora) with LitData

mtasic85 opened this issue 1 year ago • 0 comments

Is it possible to finetune (LoRA) a model with raw LitData, like the data used in pretraining? The main reason is that I want to perform "lightweight" continued pretraining on longer sequences, but through the finetuning path. Unsloth supports this.

This way, I wouldn't have to convert the model every time just so I can finetune (continued pretraining) with Unsloth.
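For context, the "raw LitData" referred to above is essentially a flat tokenized stream that litgpt's pretraining path packs into fixed-length blocks (via litdata's `TokensLoader`). A minimal stdlib-only sketch of that packing, assuming the trailing remainder is simply dropped (an assumption for illustration, not litdata's exact implementation):

```python
from typing import Iterator, List

def iter_token_blocks(tokens: List[int], block_size: int) -> Iterator[List[int]]:
    """Yield contiguous fixed-length blocks from a flat token stream.

    Mimics (roughly) how a pretraining-style loader slices a tokenized
    corpus into training sequences; the remainder is dropped here.
    """
    for start in range(0, len(tokens) - block_size + 1, block_size):
        yield tokens[start:start + block_size]

# Toy stream: 10 tokens packed into blocks of 4 -> two full blocks.
stream = list(range(10))
blocks = list(iter_token_blocks(stream, block_size=4))
print(blocks)  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

A LoRA finetuning loop that consumed blocks like these directly would skip the instruction-style prompt/response formatting that litgpt's finetune commands currently expect, which is the gap this issue asks about.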

mtasic85 · Mar 13 '25