
Questions About "v" Parameterization in LoRA Training

Open JFrankLee opened this issue 1 year ago • 0 comments

Hello,

I have some questions regarding the parameterization "v" in the context of training and using LoRA (Low-Rank Adaptation) with a 512-base model. Here are my observations:

- Whether or not I enable "v_parameterization" during LoRA training seems to have no impact when the LoRA is later loaded.
- When generating with the 512-base model alone (no LoRA loaded), it requires a non-"v_parameterization" configuration to work correctly.
- But when the LoRA is loaded into the 512-base model, "v_parameterization" must be enabled; otherwise the generated images are pure noise.

Could you please provide some insights or explanations for these observations? Thank you for your assistance!
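For context on what the flag switches, here is a minimal sketch of the two training targets, assuming the standard epsilon- vs. v-prediction definitions (v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * x0); the function and argument names are just for illustration and are not taken from this repo:

```python
import torch

def training_target(x0, noise, alphas_cumprod, t, v_parameterization=False):
    """Return the denoiser training target for a sample noised at timestep t.

    With epsilon-parameterization the network is trained to predict the
    added noise; with v-parameterization it is trained to predict
    v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * x0.
    """
    alpha_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    sqrt_alpha_bar = alpha_bar.sqrt()
    sqrt_one_minus_alpha_bar = (1.0 - alpha_bar).sqrt()
    if v_parameterization:
        return sqrt_alpha_bar * noise - sqrt_one_minus_alpha_bar * x0
    return noise
```

My understanding is that the 512-base checkpoint was trained with the epsilon target, so I would expect the non-"v_parameterization" setting to be the correct one both with and without the LoRA, which is why the behavior above confuses me.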

JFrankLee · Jun 27 '24 03:06