
CUDA out of memory

Open MrRaymondLee opened this issue 3 years ago • 83 comments

I'm receiving the following error but am unsure how to proceed. This is the output even after setting --n_samples 1:

RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

MrRaymondLee avatar Aug 19 '22 23:08 MrRaymondLee

You have too little VRAM. Use a smaller resolution, or the optimized fork:

https://github.com/basujindal/stable-diffusion

kybercore avatar Aug 20 '22 12:08 kybercore

Or just use this: https://huggingface.co/spaces/stabilityai/stable-diffusion

breadbrowser avatar Aug 23 '22 19:08 breadbrowser

PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

Then run the command with:

--n_samples 1
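
If you would rather not depend on the shell, the same setting can be made from Python at the very top of the script you run. A minimal sketch (the allocator reads the variable when CUDA is first used, so setting it before torch is imported is the safest ordering):

```python
# set the allocator config before torch initializes CUDA
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:128"
)

import torch  # import only after the variable is in place
```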

ayyar avatar Aug 24 '22 10:08 ayyar

@ayyar I believe garbage_collection_threshold is only available with PyTorch >= 1.12, whereas it is pinned at 1.11 in this project's conda environment. I upgraded PyTorch to 1.12, but since I still had memory issues, I switched to https://github.com/basujindal/stable-diffusion, as suggested by @parsec501, which works at least with my RTX 2070 (8 GB) on Ubuntu 20.04.
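
On an older build the option is simply rejected ("Unrecognized CachingAllocator option: garbage_collection_threshold", as reported further down in this thread), in which case you can keep just max_split_size_mb:128. To check which PyTorch the active environment actually has, a quick sketch:

```python
# garbage_collection_threshold reportedly needs PyTorch >= 1.12
import torch
print(torch.__version__)
```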

cjauvin avatar Aug 24 '22 22:08 cjauvin

@ayyar where is the configuration file where I can set this? I'm new to Python environments.

ninjoala avatar Sep 06 '22 00:09 ninjoala

@ayyar where is the configuration file where I can set this? I'm new to Python environments.

me too...

tuwonga avatar Sep 06 '22 09:09 tuwonga

I'm receiving the following error but am unsure how to proceed. This is the output even after setting --n_samples 1:

RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Try adding --W 256 --H 256 to your command. The default image size is 512x512, which may be why you are having this issue.
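
For example, assuming the stock CompVis txt2img entry point (the prompt text is just illustrative): python scripts/txt2img.py --prompt "a painting of a fox" --W 256 --H 256 --n_samples 1. Halving both dimensions cuts the pixel count to a quarter, and the attention layers' memory use falls even faster than that.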

saadmuk123 avatar Sep 08 '22 20:09 saadmuk123

I'm receiving the following error but am unsure how to proceed. This is the output even after setting --n_samples 1:

RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Try adding --W 256 --H 256 to your command. The default image size is 512x512, which may be why you are having this issue.

Well, I now use basujindal's optimizedSD and I can generate 1280x832. Try it!

tuwonga avatar Sep 08 '22 23:09 tuwonga

I have this same issue, and looking at my memory usage it sits at 5 GB before I even try to generate any images and just stays there. Is there any way to get it to let go of that memory so I can use the program?

slymeasy avatar Sep 10 '22 12:09 slymeasy

I have this same issue, and looking at my memory usage it sits at 5 GB before I even try to generate any images and just stays there. Is there any way to get it to let go of that memory so I can use the program?

try basujindal optimizedSD
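
If you want to try freeing the memory by hand first, PyTorch can release its cached blocks back to the driver, though only memory nothing references anymore actually goes away, so this will not help if the 5 GB is the loaded checkpoint itself. A minimal sketch (the tensor is a hypothetical stand-in for whatever holds the memory in your script):

```python
import gc
import torch

# hypothetical stand-in for whatever is holding GPU memory (~1 GiB fp32)
x = torch.zeros(1024, 1024, 256, device="cuda")
print(torch.cuda.memory_allocated() // 2**20, "MiB allocated")

del x                     # drop the Python reference first
gc.collect()              # let Python reclaim the object
torch.cuda.empty_cache()  # hand the now-unused cached blocks back to the driver
print(torch.cuda.memory_reserved() // 2**20, "MiB still reserved by PyTorch")
```

If it is the model weights, only deleting the model or restarting the process frees them.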

tuwonga avatar Sep 10 '22 13:09 tuwonga

try basujindal optimizedSD

Do you have a link to a Latent Diffusion Upscaler that I can install on Windows? That's what I was trying to install using a Stable Diffusion webUI installation tutorial I found on the internet. I need that upscaler.

slymeasy avatar Sep 11 '22 07:09 slymeasy

try basujindal optimizedSD

Do you have a link to a Latent Diffusion Upscaler that I can install on Windows? That's what I was trying to install using a Stable Diffusion webUI installation tutorial I found on the internet. I need that upscaler.

Do you mean REALESRGAN?

tuwonga avatar Sep 11 '22 09:09 tuwonga

try basujindal optimizedSD

Do you have a link to a Latent Diffusion Upscaler that I can install on Windows? That's what I was trying to install using a Stable Diffusion webUI installation tutorial I found on the internet. I need that upscaler.

Please use this Stable Diffusion GUI and it will solve your upscaling issue: stable-diffusion-webgui

tuwonga avatar Sep 11 '22 09:09 tuwonga

Please use this Stable Diffusion GUI and it will solve your upscaling issue: stable-diffusion-webgui

Thanks, I got the same CUDA error at first, but that guy had a lot of helpful pointers in the troubleshooting section and I managed to get it working.

I was already using REALESRGAN. I was looking for the Latent Diffusion upscaler because it supposedly adds more detail and "fixes" the picture. Some images I get back from Stable Diffusion just don't have enough detail, or are too messed up to be any good. I see there is an option called "SD Upscale" that might be what I was looking for.

slymeasy avatar Sep 11 '22 22:09 slymeasy

Actually, I fixed it just by replacing attention.py and module.py with the ones from basujindal.

tuwonga avatar Sep 12 '22 08:09 tuwonga

Actually, I fixed it just by replacing attention.py and module.py with the ones from basujindal.

Thanks a lot for that link. That version is probably better than the original one I was trying to install. I'm getting back a lot more usable images.

slymeasy avatar Sep 13 '22 05:09 slymeasy

PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

Then run the command with:

--n_samples 1

Where do I run this command?

ig-sachin avatar Oct 06 '22 13:10 ig-sachin

In a terminal, preferably inside the same environment where your torch is installed. On Linux/macOS run it as export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128, or use set instead of export if you are on Windows.
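
To confirm the variable actually reached the Python process afterwards, a quick sketch:

```python
# prints None if the allocator config was not inherited by this process
import os
print(os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))
```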

snknitin avatar Oct 11 '22 10:10 snknitin

Following the posts from @ayyar and @snknitin: I was using the webui version of this, but yes, setting this before launching stable-diffusion allowed me to run a process that was previously erroring out with memory allocation errors. Thank you all.

set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

ETdoFresh avatar Oct 14 '22 00:10 ETdoFresh

Following the posts from @ayyar and @snknitin: I was using the webui version of this, but yes, setting this before launching stable-diffusion allowed me to run a process that was previously erroring out with memory allocation errors. Thank you all.

set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

This gives me:

Unrecognized CachingAllocator option: garbage_collection_threshold

clankill3r avatar Oct 21 '22 13:10 clankill3r

Following the posts from @ayyar and @snknitin: I was using the webui version of this, but yes, setting this before launching stable-diffusion allowed me to run a process that was previously erroring out with memory allocation errors. Thank you all. set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

This gives me:

Unrecognized CachingAllocator option: garbage_collection_threshold

I am having the exact same problem as you, trying to get the stable-diffusion webui version working on my Windows machine.

ruairimcmahon avatar Oct 22 '22 23:10 ruairimcmahon

Try following this one: https://www.youtube.com/watch?v=OjOn0Q_U8cY. It helped me.

funfunnypl avatar Oct 26 '22 18:10 funfunnypl

basujindal optimizedSD

Is it safe?

hamdimina avatar Nov 11 '22 19:11 hamdimina

basujindal optimizedSD

Is it safe?

Yes, but at the moment I suggest the Automatic1111 webui.

tuwonga avatar Nov 11 '22 20:11 tuwonga

Thanks a lot for your reply, but I want to ask you something. I'm training a dataset with the yolov7 algorithm using CUDA and PyTorch, and when I run the training it shows me this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 4.00 GiB total capacity; 2.49 GiB already allocated; 0 bytes free; 2.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Do you think the Automatic1111 WebGUI will fix this problem?

hamdimina avatar Nov 12 '22 11:11 hamdimina

How much VRAM do you have on your GPU? Make a clean installation of Automatic1111; maybe that will fix your issue.
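
For the training error itself, though, the more direct lever is a smaller batch size or image size, since an image-generation UI has no effect on a yolov7 training job. If your yolov7 uses the usual YOLO-style flags, something like python train.py --batch-size 4 could be a starting point (the flag name and value are illustrative; check your train.py --help).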

tuwonga avatar Nov 12 '22 11:11 tuwonga

Excuse me, how can I check the VRAM?

hamdimina avatar Nov 12 '22 13:11 hamdimina

I mean your graphics card.
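
If you want to check it quickly, nvidia-smi prints the card's total memory, or from Python (a minimal sketch using standard PyTorch calls):

```python
import torch

# report the name and total VRAM of the first CUDA device
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 2**30:.1f} GiB VRAM")
else:
    print("no CUDA device visible")
```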

tuwonga avatar Nov 12 '22 15:11 tuwonga

Okay, I have an NVIDIA GeForce GTX 1650. Dedicated GPU memory: 0.0/4.0 GB; dedicated video memory: 128 MB; installed RAM: 12.0 GB (11.8 GB usable). Can I download the Automatic1111 Stable Diffusion?

hamdimina avatar Nov 12 '22 15:11 hamdimina