Olivier V.
If anyone is interested, I successfully trained my model by installing the requirements as follows: `%pip install transformers ftfy bitsandbytes gradio natsort safetensors xformers torch==2.2.1 accelerate kaleido cohere openai tiktoken`
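A minimal sanity-check sketch (not part of the original comment) to confirm the pinned packages are actually the ones loaded before starting a training run; it assumes the kernel was restarted after the `%pip install` and only lists the packages named in the command above:

```python
# Print installed versions of the packages pinned above; a mismatch here
# (e.g. torch != 2.2.1) usually explains training failures later on.
from importlib.metadata import version, PackageNotFoundError

packages = [
    "transformers", "ftfy", "bitsandbytes", "gradio", "natsort",
    "safetensors", "xformers", "torch", "accelerate", "kaleido",
    "cohere", "openai", "tiktoken",
]

for name in packages:
    try:
        print(f"{name}=={version(name)}")  # expect torch==2.2.1 among others
    except PackageNotFoundError:
        print(f"{name}: NOT INSTALLED")
```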
The Flux model that is not loading: [screenshot]
Another model that is loading normally: [screenshot]
 Sorry for the late reply. It's not used at all, anyway.
Just after upgrading to the latest dev version:
```
2024-10-30 14:06:29,399 | sd | DEBUG | installer | Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
2024-10-30 14:06:29,495 | sd...
```
Erf... new error:
```
2024-10-30 16:13:43,141 | sd | DEBUG | model_flux | Load model: type=FLUX model="Diffusers\Disty0/FLUX.1-dev-qint4_tf-qint8_te" repo="Disty0/FLUX.1-dev-qint4_tf-qint8_te" unet="None" te="None" vae="None" quant=qint8 offload=model dtype=torch.bfloat16
2024-10-30 16:13:43,527 | sd |...
```
Actually, I tried with the nf4 Flux model, so it might be an issue with Disty0/FLUX.1-dev-qint4_tf-qint8_te. It was working before Flux was completely supported in SDNext, and now maybe I have...
@vladmandic I agree that something is wrong with a dependency, but I don't think it is optimum.quanto directly. I think there is a combination of diffusers/torch/quanto on Windows that currently...
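A hypothetical diagnostic sketch (not from the original thread) that dumps the exact versions of the suspected packages plus the OS, which should make it easier to compare a failing Windows setup against a working one; the package list is only my assumption of what is relevant:

```python
# Report OS and the torch / diffusers / optimum-quanto combination in use,
# so the failing version mix on Windows can be identified and reproduced.
import platform
from importlib.metadata import version, PackageNotFoundError

print("platform:", platform.platform())
for pkg in ("torch", "diffusers", "optimum-quanto", "accelerate"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```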
This is my first contribution, so forgive me if I missed something in the process: https://github.com/quarkusio/quarkus/pull/47642
You can also fix it by changing `model_profile` to `profile` in that line. Probably a refactoring mistake?
```javascript
_selectAPI(profile) {
  if (typeof profile === 'string' || profile instanceof String)...
```