drdsgvo
Have the same issue when trying to use AutoPipelineForText2Image from diffusers:

# load model, works
pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
    model_local, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

# generate, here comes the error
image = pipeline_text2image(prompt="any...
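For reference, a self-contained sketch of the pattern above, assuming the SDXL base checkpoint in place of the local model path and a placeholder prompt (both are assumptions, not from the original comment):

import torch
from diffusers import AutoPipelineForText2Image

# Assumed checkpoint; the comment above loads a local copy (model_local).
model_local = "stabilityai/stable-diffusion-xl-base-1.0"

# Load the pipeline in fp16 and move it to the GPU.
pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
    model_local,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Generation step where the comment reports the error.
image = pipeline_text2image(prompt="an astronaut riding a green horse").images[0]
image.save("output.png")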
Just some information to add to the above discussion: for the RTX 4090 I found the promised peak throughput to be 82.58 TFLOPS, and changed model.py accordingly. That resulted in an...
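As a rough illustration of the kind of change described (a sketch only; the constant name and the MFU helper below are assumptions, not the project's actual model.py):

# Hypothetical constant: RTX 4090 peak FP16 throughput used as the "promised" FLOPS.
RTX_4090_PEAK_TFLOPS = 82.58

def mfu(achieved_tflops: float, peak_tflops: float = RTX_4090_PEAK_TFLOPS) -> float:
    # Fraction of the GPU's promised peak throughput actually achieved.
    return achieved_tflops / peak_tflops

print(f"MFU: {mfu(35.0):.1%}")  # e.g. 35 achieved TFLOPS -> about 42% utilization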
> I cannot believe that implementing such a simple and useful feature has been pending for over 7 months with no clear path forward! This is more like a simple...
Are there any updates on this very important issue? Not implementing logits is not a valid solution. Anyone (including me) who needs logits will move from Ollama to a...
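For context on what is being asked for, a minimal sketch of getting per-token logits with Hugging Face transformers, the kind of alternative such a move might target (the model choice and prompt are assumptions):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed small model, purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits for the next token: shape (batch_size, vocab_size)
next_token_logits = outputs.logits[:, -1, :]
print(next_token_logits.shape)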
> I'm sorry that you don't like opaque iframes or k-anonymity, but I don't think your worries are anything new. You say "What if a hacker intrudes a terminal of...
I got the same error with transformers 4.40.1
I can confirm all of the above: after fixing the parameter issues, the tensor size mismatch error appeared. The parameter issues seem to be explained by a change...
Got the same error as given in the first post here: "...terminate called after throwing an instance of 'ReadSocketException'". I used the sample call given on the start page of this project:...
Thank you for your reply!

> @drdsgvo you need to be sure that all devices have enough RAM.

The root has 16 GB VRAM, the worker has 20 GB VRAM....