stpg06
> training a low-rank adaptation network can be applied to parts of the text encoder, and it is much easier to train and to apply at different strength levels. > > this requires pretty...
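The low-rank adaptation idea described in the quote can be sketched in a few lines: the frozen base weight gets a small additive update `B @ A` of rank `r`, and a scale factor lets you apply the adapter at different strengths at inference time. A minimal NumPy sketch, with all names and dimensions illustrative (in real LoRA, `B` is initialized to zero so training starts from the base model):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 8, 2   # toy sizes; real attention layers are much larger
W = rng.standard_normal((d_out, d_in))   # frozen base weight (not trained)
A = rng.standard_normal((rank, d_in))    # trainable low-rank factor
B = rng.standard_normal((d_out, rank))   # trainable low-rank factor

def lora_forward(x, scale=1.0):
    # base layer plus the low-rank update; `scale` sets the adapter strength
    return x @ (W + scale * (B @ A)).T

x = rng.standard_normal((1, d_in))
base = x @ W.T
# scale=0 turns the adapter off entirely; values between 0 and 1 blend it in
assert np.allclose(lora_forward(x, scale=0.0), base)
```

Because only `A` and `B` are trained, the number of trainable parameters drops from `d_out * d_in` to `rank * (d_out + d_in)`, which is where the training savings come from.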
> trained weights have to be stored in fp32, which increases VRAM consumption a lot compared to inference. I have trained a lot of models on sd1.4 and...
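The fp32 memory overhead is easy to ballpark: full fine-tuning with Adam keeps fp32 copies of the weights, gradients, and two optimizer moments (16 bytes per parameter), versus 2 bytes per parameter for fp16 inference weights. A rough back-of-envelope sketch, assuming the commonly cited ~860M parameter count for the SD1.4 UNet and ignoring activations:

```python
# Back-of-envelope VRAM for weights + optimizer state only (activations excluded).
params = 860e6  # approximate SD1.4 UNet parameter count (assumption)

fp16_inference = params * 2          # weights only, 2 bytes each
fp32_training_adam = params * 4 * 4  # fp32 weights, grads, Adam m and v

print(f"fp16 inference weights:   {fp16_inference / 2**30:.1f} GiB")
print(f"fp32 Adam training state: {fp32_training_adam / 2**30:.1f} GiB")
```

That is an 8x multiplier before activations even enter the picture, which is consistent with the complaint above about training VRAM dwarfing inference VRAM.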
Wow, that is high. It's too bad you can't do it in separate fine-tuning sessions. I guess the text encoder wouldn't match the UNet then, though. I accept that...