lkewis
Not sure if you already fixed this, but adding `-0.5 *` before the sampling in ProjectStep2 seems to have fixed it (same as ProjectStep1): `[numthreads(THREAD_X, THREAD_Y, THREAD_Z)] void ProjectStep2 (uint2...
Ok, the mismatch was because the `width` and `height` variables weren't being passed into the pipe call, which needs to be modified to (GENERATE_DIVERSE): `with autocast(device): image = pipe( # Diffuse magic....
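A rough sketch of that fix, for anyone hitting the same mismatch. `pipe` and `device` are the objects from the thread's script; the `sd_dims` helper is hypothetical, not part of diffusers, just a guard so the dimensions you pass actually reach the call:

```python
# Hedged sketch: make sure width/height are forwarded into the pipe()
# call (GENERATE_DIVERSE step) instead of relying on the defaults.
# `sd_dims` is a made-up helper name; `pipe`/`device` come from the
# surrounding script, as described in the thread.

def sd_dims(width, height):
    """Validate and package dimensions as pipe() keyword arguments.

    Stable Diffusion scripts generally require width/height
    divisible by 64 (the latent space is downsampled by 8).
    """
    if width % 64 or height % 64:
        raise ValueError(f"{width}x{height} must be divisible by 64")
    return {"width": width, "height": height}

# Usage in the generation loop (commented out, needs the real pipe):
# with autocast(device):
#     image = pipe(prompt, **sd_dims(768, 512)).images[0]
```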
> I was also not able to paste my token, what did you change on line 154? Hey, on line 154 mine reads: `use_auth_token="randomtokenstringhere"`
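If pasting the token into the terminal is the problem, one workaround is to read it from an environment variable instead of hard-coding it on that line. This is only a sketch; the env var name `HUGGING_FACE_HUB_TOKEN` and the helper are my assumptions, and the `from_pretrained` call is commented out since it needs the real script context:

```python
import os

def hf_token():
    # Hypothetical helper: pull the Hugging Face access token from the
    # environment rather than editing the script to paste it in.
    token = os.environ.get("HUGGING_FACE_HUB_TOKEN", "")
    if not token:
        raise RuntimeError("Set HUGGING_FACE_HUB_TOKEN to your HF access token")
    return token

# Then on the line that currently hard-codes the token:
# pipe = StableDiffusionPipeline.from_pretrained(
#     "CompVis/stable-diffusion-v1-4", use_auth_token=hf_token())
```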
It is quite easy to overfit during training, and then it is really hard to stylise what you trained. If you didn't configure 'yellow hat' as a custom embedding token,...
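For reference, the place that token gets configured in the textual inversion training config looks roughly like this. This is a hedged sketch based on the layout of the rinongal/textual_inversion repo's finetune config; check your own config file, as the exact field names may differ:

```yaml
# Sketch of the personalization section of the training config
# (field names assumed from the textual_inversion repo):
model:
  params:
    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]      # token you use in prompts
        initializer_words: ["hat"]      # word whose embedding seeds it
```

If 'yellow hat' was never set up as the placeholder here, the model has no embedding attached to it, which would explain the stylisation trouble.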
I wrote about my own experiments here, but there are no 'winning formulas' yet; everyone is still testing this stuff: https://www.reddit.com/r/StableDiffusion/comments/xia53p/textual_inversion_results_trained_on_my_3d/
> @lkewis I got Textual Inversion to work on M1 thanks to your guide, after fixing the `nan` M1 error. Have you done any more experiments? It'd be good to...
@Any-Winter-4079 Hey, I'm really sorry, but I've just noticed in that other thread you've taken my images and started distributing them as a training example exactly as I have done....
@Any-Winter-4079 Thank you, much appreciated. No need to remove those amazing images you've documented; I was just going to reply with my Dreambooth results as a comparison. It's actually very helpful...
@Any-Winter-4079 Just thought I'd feed something back that has been uncovered. The 'Dreambooth' implementations most people have been using are actually nothing more than Textual Inversion with the model unfrozen...