NoMansPC

Results 15 comments of NoMansPC

Is anybody else getting this error: `RuntimeError: "addmm_cuda" not implemented for 'Float8_e4m3fn'`? I'm using ComfyUI with the Krita plug-in. I'm not sure whether that has any relation to the problem I'm experiencing...

> Hi, thanks for your question!
>
> You're correct that this project requires Python 3.11. I believe that if you use a virtual environment as directed in the Readme,...

Waiting for this as well. While the model will be very slow on my 4060 Ti 16 GB, from what I've seen it almost always gives me something I can use.

> Please use the ones from https://huggingface.co/Comfy-Org - they are also fp8. They're only bigger size because they include the vae + text encoder, which you'd need separately for the...

> > new post by Illyasviel NF4 model version of flux
> > [lllyasviel/stable-diffusion-webui-forge#981](https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981)
>
> comfyanon mentioned on reddit that he didn't use/add nf4 due to loss of quality...

Same. I'm waiting for the plugin to support it too.

Same. There are supposed to be different fields for the VAE and the text encoder, but they don't show up for me.

> Has this been implemented yet?

Not as far as I know, which is a shame.

> That means your backend hoists system messages. Enable strict prompt post-processing in API connections panel.

I can't find it.

![Image](https://github.com/user-attachments/assets/4fe18d91-ca35-4f29-9692-98a56eb1ed09)