
RuntimeError: CUDA error: operation not supported

Open YaseGar opened this issue 1 year ago • 3 comments

Hi, I do have CUDA installed and the versions match, but I still can't get any images after running. I installed cu121 first but it didn't work; then I manually installed cu118 and it still doesn't work.

platform: Windows 10 CUDA Version: 11.8

[Warning] ComfyUI-0 on port 7821 stderr: Traceback (most recent call last):
[Warning] ComfyUI-0 on port 7821 stderr: RuntimeError: CUDA error: operation not supported
[Info] No images were generated (all refused, or failed).

YaseGar avatar Mar 22 '24 08:03 YaseGar

  • Check debug logs for more info
  • Check that, uh, you have hardware capable of running SD? i.e., an NVIDIA GPU with still-maintained firmware
  • When in doubt, if everything else seems to be right, try a restart or a full reinstall

mcmonkey4eva avatar Mar 24 '24 10:03 mcmonkey4eva


Hi, thanks for the reply. The log is already at the top. I have two V100s installed.

YaseGar avatar Mar 25 '24 01:03 YaseGar

What you posted isn't the Debug log, it's the Info log.

EDIT1: Nvidia docs are hard to decipher around non-consumer cards, but I believe the V100 is the datacenter equivalent of the GTX 16xx series? I wouldn't be surprised by driver issues there, considering both the age and the unusual nature of that half-generation.

EDIT2: Apparently Volta is a "beta release of Turing" (16xx/20xx): Volta is CUDA compute capability 7.0 while Turing is 7.5, i.e. it's even more unusual than the already-unusual generation was. Oof.

You may have to google around and find other people running ComfyUI or SD in general on V100s to see if they need any special considerations.
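The compute-capability gap described above can be made concrete with a small sketch. The architecture-to-capability mapping below is simplified from NVIDIA's public CUDA documentation (several architectures span multiple values); the helper function is hypothetical, just to illustrate why binaries built only for a newer capability fail on Volta:

```python
# Rough map of NVIDIA architectures to CUDA compute capabilities (simplified;
# some architectures span several values depending on the exact chip).
COMPUTE_CAPABILITY = {
    "Pascal (GTX 10xx, P100)": 6.0,
    "Volta (V100, Titan V)": 7.0,
    "Turing (GTX 16xx / RTX 20xx)": 7.5,
    "Ampere (RTX 30xx, A100)": 8.0,
}

def supports_arch(device_cc: float, required_cc: float) -> bool:
    """Code compiled for `required_cc` will not run on a device
    with a lower compute capability."""
    return device_cc >= required_cc

volta = COMPUTE_CAPABILITY["Volta (V100, Titan V)"]
turing = COMPUTE_CAPABILITY["Turing (GTX 16xx / RTX 20xx)"]
print(supports_arch(volta, turing))  # False: Turing-only builds fail on a V100
```

This is one plausible source of "operation not supported" on older datacenter cards, though driver or allocator issues (see the workaround below in the thread) can produce the same error.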

mcmonkey4eva avatar Mar 25 '24 12:03 mcmonkey4eva

I was in the same situation with two Tesla M60 cards. I fixed the issue with these steps:
1. Go into the menu Server\Backends
2. Edit the running backend
3. Add this in ExtraArgs: --disable-cuda-malloc

In my case, with 8 GB of VRAM, I also needed to add to ExtraArgs: --lowvram --force-fp16

Additional information: if you want to use multiple GPUs at the same time, you need to create additional backends with the same ExtraArgs and just change the GPU_ID (must be 0, 1, 2, 3, ...).

Now I'm able to use the two GPUs on each of my two M60 cards to generate four images at the same time, one per GPU.
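The one-backend-per-GPU setup above can be sketched as building one ComfyUI launch command per GPU_ID. This is not SwarmUI's actual code; the port numbering and script path are illustrative, but --cuda-device, --disable-cuda-malloc, --lowvram, and --force-fp16 are real ComfyUI command-line flags:

```python
# Sketch: one ComfyUI backend per GPU, all sharing the same extra args.
# Ports and "main.py" path are illustrative assumptions.
BASE_PORT = 7821
EXTRA_ARGS = ["--disable-cuda-malloc", "--lowvram", "--force-fp16"]

def backend_command(gpu_id: int) -> list:
    """Build the launch command for the backend bound to `gpu_id`."""
    return (["python", "main.py",
             "--port", str(BASE_PORT + gpu_id),
             "--cuda-device", str(gpu_id)]
            + EXTRA_ARGS)

# Two M60 cards expose four GPUs total (two GPUs per card).
for gpu in range(4):
    print(" ".join(backend_command(gpu)))
```

In SwarmUI itself you don't run these commands by hand; the GPU_ID field on each backend plays the role of the --cuda-device argument here.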

allerias avatar Apr 04 '24 09:04 allerias