soten355

12 comments by soten355

> @42lux currently our backend doesn't support reshape, but this feature is in our roadmap for further releases If I wanted to hard code into the backend my own dimensions,...

I had the same error just running demo_web.py through a virtual environment. The problem was fixed with your suggestion, `pip install streamlit_drawable_canvas`. I agree, it should be part of...

The model should be stored in the cache created by the Hugging Face pip package. On my Mac, mine was located in: Users/Me/.cache/huggingface/hub/models--bes-dev--stable-diffusion-v1-4-openvino As for increasing size, I believe it's not...
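If it helps, here's a minimal sketch for listing what's in that cache directory. It assumes the default location (`~/.cache/huggingface/hub`); if `HF_HOME` or `HF_HUB_CACHE` is set in your environment, the cache may live elsewhere.

```python
from pathlib import Path

# Default Hugging Face hub cache location (env vars like HF_HOME can override this)
cache = Path.home() / ".cache" / "huggingface" / "hub"

# Each downloaded repo lives in a directory named "models--<org>--<name>"
for model_dir in sorted(cache.glob("models--*")):
    print(model_dir.name)
```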

> > The clip-vit-large-patch14 (https://huggingface.co/openai/clip-vit-large-patch14) model used by SD can only handle sequences of 77 tokens. It works like that in the original pytorch implementation as well. Anything longer than...

This one doesn't, but Keras did (improving upon Divam's code): https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion/diffusion_model.py
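The 77-token cap comes from CLIP's fixed positional-embedding length, so the usual workaround is to encode a long prompt in 77-token windows and combine the resulting embeddings. A rough sketch of the chunking step (the helper name and the BOS/EOS handling here are my own, not from this repo or keras-cv):

```python
def chunk_token_ids(token_ids, bos_id, eos_id, max_len=77):
    """Split a long prompt's token ids into CLIP-sized windows.

    Each window gets its own BOS/EOS pair and is padded with EOS up to
    max_len, so every chunk can be fed to the text encoder unchanged.
    """
    body = max_len - 2  # room left after BOS and EOS
    chunks = []
    for i in range(0, max(len(token_ids), 1), body):
        window = token_ids[i:i + body]
        chunk = [bos_id] + window + [eos_id]
        chunk += [eos_id] * (max_len - len(chunk))  # pad to max_len
        chunks.append(chunk)
    return chunks
```

Each chunk is then encoded separately and the embeddings concatenated (or averaged) before being passed to the diffusion model.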

I believe I got SD2.x 512 to work. Had to re-work the UNet model parameters and completely convert the CLIP encoder to OpenCLIP. In my repo, the user has the...

The guidance scale is any float (for example 7.5, 10, or 11.357) between 0 and 20. You'll plug that into the **unconditional_guidance_scale** variable. The input strength for an input image...
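For context, that variable is the scale in classifier-free guidance: the sampler makes two noise predictions per step, one unconditional and one text-conditioned, and the scale controls how far the final prediction is pushed toward the conditioned one. A minimal sketch (function name is mine, not from the repo):

```python
import numpy as np

def guided_noise(uncond_pred, cond_pred, unconditional_guidance_scale):
    # Classifier-free guidance: move the noise prediction away from the
    # unconditional estimate, toward the text-conditioned one.
    return uncond_pred + unconditional_guidance_scale * (cond_pred - uncond_pred)
```

A scale of 1.0 reproduces the conditioned prediction exactly; larger values follow the prompt more aggressively at the cost of image quality.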

You do need to specify the width and height, but there's no reason to do it again for every generation if nothing changes. I rewrote the code in my repo...

Unfortunately the model needs to have an image size. If your computer can handle the memory use, you could create multiple classes that house pre-compiled models of different image dimensions,...
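One way to organize that, assuming memory allows, is a small pool that lazily compiles and caches one model per resolution. This is a sketch under my own naming, not code from the repo:

```python
class EnginePool:
    """Cache one compiled model per (width, height), trading memory for
    the cost of recompiling whenever the requested size changes."""

    def __init__(self, compile_fn):
        self._compile = compile_fn  # a function that builds/compiles a model for a size
        self._engines = {}

    def get(self, width, height):
        key = (width, height)
        if key not in self._engines:
            self._engines[key] = self._compile(width, height)
        return self._engines[key]
```

Usage would be `pool.get(512, 512)` before each generation; the expensive compile only runs the first time a given resolution is requested.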

Just so I can keep following along, your goal is to make a TF Lite model so you can use a TPU? I looked up a TPU and is it...