litgpt

20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.

Results: 460 litgpt issues

My understanding is that the repo currently provides 4-bit quantization only for inference, not for finetuning. If that is the case, is there a plan to integrate QLoRA-style 4-bit finetuning?
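
For context, the core QLoRA idea can be sketched as follows. This is an illustrative stand-in only (the class name, shapes, and dtypes are assumptions, not litgpt's API): the base weight stays frozen in a quantized format while only small low-rank adapters are trained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QLoRALinearSketch(nn.Module):
    """Illustrative only: frozen (nominally 4-bit) base weight + trainable LoRA adapters."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # A real QLoRA layer stores this in NF4; a frozen fp16 tensor stands in here.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features, dtype=torch.float16), requires_grad=False
        )
        # Only these low-rank factors receive gradients during finetuning.
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = F.linear(x, self.weight.to(x.dtype))
        update = F.linear(F.linear(x, self.lora_a.to(x.dtype)), self.lora_b.to(x.dtype))
        return base + self.scaling * update
```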

This PR adds changes specific to finetuning on TPUs. Used a TPU v4-8 with:

```
log_interval = 1
devices = 4
batch_size = 64 / devices
micro_batch_size = 4
gradient_accumulation_steps = ...
```
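
As a side note, the usual relationship between these values (an assumption about the script's convention, not taken from the PR) is:

```python
# Illustrative only: derive gradient accumulation from global and micro batch sizes.
devices = 4
batch_size = 64 // devices           # 16 samples per optimizer step on each device
micro_batch_size = 4                 # samples per forward/backward pass
gradient_accumulation_steps = batch_size // micro_batch_size  # -> 4
assert batch_size % micro_batch_size == 0
```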

```python
import json
import os
import sys
import time
import warnings
from pathlib import Path
from typing import Optional

import lightning as L
import torch

from generate import generate
from ...
```

Labels: enhancement, generation

Replaces DeepSpeed with FSDP. Requires https://github.com/Lightning-AI/lightning/pull/17845. Closes #116, closes #177, closes #169. Falcon 7B takes 32 GB max memory allocated using 2 devices and 32-true or bf16-mixed precision. Loss is...
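
A minimal sketch of what the switch to FSDP can look like with Lightning Fabric (the PR's exact arguments may differ; this only assumes Fabric's built-in "fsdp" strategy string):

```python
import lightning as L

# Shard parameters, gradients, and optimizer state across 2 devices with FSDP,
# using one of the precisions quoted above.
fabric = L.Fabric(devices=2, strategy="fsdp", precision="bf16-mixed")
fabric.launch()
# The usual Fabric flow follows: model, optimizer = fabric.setup(model, optimizer)
```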

This is an attempt to implement **full** finetuning (as opposed to parameter-efficient finetuning via adapters etc.): a script for full finetuning that updates all layers. Todos:

- [x] Create finetune/full.py script ...
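
To make the contrast concrete, here is a hedged sketch of the two regimes (the helper names and the adapter naming convention are assumptions, not litgpt's API):

```python
import torch

def make_fully_trainable(model: torch.nn.Module) -> None:
    # Full finetuning: every parameter receives gradients and optimizer state.
    for param in model.parameters():
        param.requires_grad = True

def make_adapter_only(model: torch.nn.Module, adapter_keyword: str = "adapter") -> None:
    # Adapter-style finetuning (assumed naming convention): freeze the backbone,
    # train only parameters whose name contains the adapter keyword.
    for name, param in model.named_parameters():
        param.requires_grad = adapter_keyword in name
```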

In the finetuning scripts, we only allow

```python
precision: Literal["bf16-true", "32-true"] = "bf16-true",
```

But we also use DeepSpeed when `devices > 1`. However, in this case, you'd get a...
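
Since the exact failure is truncated above, the following is only an illustrative guard (the specific check and message are assumptions) showing how a precision/strategy conflict could be surfaced early instead of deep inside DeepSpeed:

```python
from typing import Literal

def validate_precision(precision: Literal["bf16-true", "32-true"], devices: int) -> None:
    # With devices > 1 the scripts switch to DeepSpeed, which may not accept
    # every "-true" precision mode, so fail early with a readable message.
    if devices > 1 and precision == "bf16-true":
        raise ValueError(
            "bf16-true may not be compatible with the multi-device (DeepSpeed) path; "
            "use 32-true or a single device."
        )
```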

Hi there! Firstly, I want to express my appreciation for the insightful tutorial and the fine-tuning repository; I've found them extremely useful. :rocket: I'm looking to clarify what the minimum...

- [ ] full #117
- [x] LoRA #128
- [x] Adapter #31

Labels: enhancement, fine-tuning