Tablaski
Really? That's great. May I ask what differences you have noticed when training with it? What CFG value did you set? Did it work on distilled models...
@Ice-YY thank you, I hadn't seen that discussion on nyanko7's Hugging Face. This is extremely interesting. Still, have you tried it yourself?
OK, so for the moment you've only tried guidance 1 with the de-distilled model? Then did you generate images with it back on the distilled model? I am very curious to know...
This is very good news then. I gather that training with de-distilled + guidance >= 1 improves prompt adherence when used back with distilled models, and that they're able to use distilled...
Thanks for your previous answers, adding my two cents here: I now always train my LoRAs using nyanko7's de-distilled model and --guidance_scale = 4.0. No issue to report when...
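For anyone wanting to try the same workflow, here is a minimal diffusers sketch of the inference side only: it assumes you already trained a LoRA against the de-distilled base (guidance 4 as above) and simply loads it back onto the stock distilled FLUX.1-dev. The LoRA filename, prompt, and generation settings are placeholders, not a recommendation.

```python
# Minimal sketch: load a LoRA trained against nyanko7's de-distilled base
# back onto the standard distilled FLUX.1-dev checkpoint for inference.
# The LoRA path and generation settings below are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",   # usual distilled checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("my_dedistilled_trained_lora.safetensors")  # placeholder
pipe.to("cuda")

image = pipe(
    prompt="your test prompt here",
    guidance_scale=3.5,        # FLUX.1-dev embedded guidance, not real CFG
    num_inference_steps=28,
).images[0]
image.save("lora_test.png")
```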
I personally use guidance 4 and rank 32 systematically. Maybe guidance 6 could be useful for datasets involving several concepts.
Thanks for your insights. I'm surprised you didn't find substantial inference improvements when training with CFG > 1. I will keep training with de-distilled and CFG = 4 because I...