Artur Komenda
Hello, I trained the adapter correctly using an RTX 3060 with 12 GB VRAM and the StableLM-3B model. Inference uses about 7.5 GB, and in my opinion the card I use is the...
@martinb-bb Finetune_adapter is possible on this hardware; full training probably is not. For me, finetuning did not use swap, but note that besides the card I have 64 GB of DDR RAM. What...
@martinb-bb 16 GB is enough. I observed max system RAM usage of ~4.5 GB, rising to ~10 GB while finetuning the adapter. Which training parameters do you use? Which dataset? Memory usage also...
@usmanxia Change the Fabric precision from `fabric = L.Fabric(accelerator="cuda", devices=1, precision="bf16-true")` to `fabric = L.Fabric(accelerator="cuda", devices=1, precision="16-mixed")`.
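For context on why this swap helps: `bf16-true` requires hardware bfloat16 support (on NVIDIA, roughly Ampere and newer), while `16-mixed` uses IEEE fp16 with mixed-precision safeguards. The key numeric difference is that bfloat16 keeps fp32's exponent range but fp16 overflows past ~65504, which is why fp16 usually needs loss scaling. A minimal stdlib-only sketch (the helper names are mine, not from Lightning):

```python
import struct

def round_to_bf16(x: float) -> float:
    """Round a float through bfloat16: fp32 bits with the low 16 mantissa bits dropped."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def round_to_fp16(x: float) -> float:
    """Round a float through IEEE fp16; values beyond ~65504 overflow to inf."""
    try:
        return struct.unpack("<e", struct.pack("<e", x))[0]
    except OverflowError:
        return float("inf")

# bf16 represents 70000 (coarsely) because it keeps fp32's exponent;
# fp16 cannot, since its max finite value is 65504.
print(round_to_bf16(70000.0))  # finite, close to 70000
print(round_to_fp16(70000.0))  # inf: out of fp16 range
```

So on a card without bfloat16 support, `"16-mixed"` is the usual fallback: Fabric then keeps master weights in fp32 and applies loss scaling to work around fp16's narrow range.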