enn-nafnlaus
I've tried all sorts of learning rates. With low learning rates, nothing happens. With high learning rates, it evolves, but increasingly jumps to... not "crazy skewed" results, but "lower quality"...
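For context, the webui's training UI accepts a stepped schedule in the learning-rate box instead of a single number, which is one way to start high and back off automatically. I believe the syntax is comma-separated `rate:until_step` pairs; the values below are purely illustrative, not a recommendation:

```
5e-3:100, 5e-4:1000, 5e-5
```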
Here's my training dataset. [training.tar.gz](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/9845228/training.tar.gz) Try to get something better than without normalization, using whatever learning rate suits you. My goal is quality over speed. For non-normalization use whatever you...
Just "[filewords]". The whole concept of filewords seems counterproductive to me. Why specify things that aren't actually descriptive of your training dataset? Boggles the mind.
Also, honestly, nothing I've gotten so far has been consistently good at all - I'm still hunting for something that works. Here are the image outputs of the first 8 seeds...
Okay, these are all much better than anything I've gotten, and at much lower steps, so I don't understand what's going on. Some questions. a) What do you mean "training...
Wait, you trained with Dreambooth? So, you weren't using AUTOMATIC1111? Your "create hypernetwork" tab certainly doesn't look like mine - I don't have a layers slider, I have a layers...
Oh... so I'm not even using the same feature as you...
Yeah, can people, like, come up with a way so that there's default settings that *actually work* for the given hypernetwork the user creates, with the default settings for what...
I'm working on 1,2,1 swish+norm+dropout. You do get progress if you start out at a really high rate, and you do indeed need to lower the rate with time. My...
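In case it helps anyone follow along, here's roughly what I mean by a 1,2,1 block with swish, norm, and dropout - a numpy sketch of the forward pass, not the webui's actual implementation (the width, init scale, and dropout rate are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # stands in for the embedding width (768 for SD 1.x)

def swish(x):
    return x / (1.0 + np.exp(-x))  # x * sigmoid(x)

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

# "1,2,1" = input width -> 2x hidden width -> back to input width
w1 = rng.normal(scale=0.01, size=(dim, 2 * dim))
w2 = rng.normal(scale=0.01, size=(2 * dim, dim))

def block(x, dropout_p=0.3, training=True):
    h = swish(layer_norm(x @ w1))
    if training:
        # inverted dropout: zero a fraction of activations, rescale the rest
        mask = rng.random(h.shape) >= dropout_p
        h = h * mask / (1.0 - dropout_p)
    # as I understand it, the hypernetwork's output gets added back to the
    # original activation (residual), so a near-zero init starts as a no-op
    return x + h @ w2

x = rng.normal(size=(1, dim))
y = block(x, training=False)
```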
So, I'm suspecting that I'm running into _overtraining_, and dropout isn't solving the problem. That is to say, it's getting so good at matching the training images that it becomes...
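If it really is overtraining, one sanity check is to track loss on a few held-out images and stop once it stalls, rather than trusting the training loss. A toy sketch of that stopping rule (nothing to do with the webui internals; `patience` is just an illustrative knob):

```python
def should_stop(val_losses, patience=3):
    """True once the held-out loss hasn't improved for `patience` checkpoints."""
    if len(val_losses) <= patience:
        return False
    best_so_far = min(val_losses[:-patience])
    # overtrained if none of the recent checkpoints beat the earlier best
    return min(val_losses[-patience:]) >= best_so_far

# still improving -> keep training
print(should_stop([1.00, 0.80, 0.70, 0.65]))  # False
# held-out loss climbing even as training loss falls -> overtrained, stop
print(should_stop([1.00, 0.50, 0.60, 0.70, 0.80]))  # True
```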