GucciFlipFlops1917
I had the same issue across a range of different models. It was driving me crazy. The issue is the difference between the official Python pip package for taming-transformers and...
Addendum: If any of the models say they can't locate logs/something/last.ckpt, just edit the corresponding yaml file so that the path points to wherever your checkpoint file actually lives...
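For reference, the edit is a one-liner in the model's config yaml. This is a sketch; the exact key layout and filenames depend on which model's yaml you downloaded, so treat the paths below as illustrative:

```yaml
# In the model's config yaml (e.g. checkpoints/faceshq.yaml -- filename is illustrative):
model:
  params:
    # Change this from the baked-in logs/.../last.ckpt path
    # to wherever you actually put the checkpoint:
    ckpt_path: checkpoints/faceshq.ckpt
```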
@aliozcan That command isn't for the checkpoint files. If you would like to run the faceshq set, for instance, you would use `python generate.py -p --vqgan_config checkpoints/faceshq.yaml --vqgan_checkpoint checkpoints/faceshq.ckpt`...
@limiteinductive thanks for saving me the time of testing out all the other CLIP models. Any idea whether it's easy to adapt the sample code to the older ViT-B/32...
Unfortunately I can't share pictures, as I don't want to post my face, but the reconstructed images are fine. The issue is with the samples and samples_scaled outputs. Both indeed present 2-3 people...
Thanks for checking. I will be fine for now :] Truly it's a matter of waiting for optimizations to roll out at this stage.
To add on from my experience, it's a balancing act between supplying variation and receiving coherent reconstructions. That comes down to the aforementioned factors of camera angle and background, as well as...
> size: 448 working on 3060 12gb

Max memory usage and batch size?
I think the lstein repo produces outputs that are more readily convertible for textual inversion, without requiring the line you cited.
The Glid-3-xl repo has such an implementation with the older latent-diffusion dataset: https://github.com/Jack000/glid-3-xl The only issue is that it uses a model specific to inpainting. It may still offer some ideas...