CUDA out of memory
I'm receiving the following error but am unsure how to proceed. This is the output even with --n_samples 1:
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 6.13 GiB already allocated; 0 bytes free; 6.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
You have too little VRAM. Use smaller resolutions or the optimized fork
https://github.com/basujindal/stable-diffusion
or just use this https://huggingface.co/spaces/stabilityai/stable-diffusion
PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
Then run the command with:
--n_samples 1
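On Linux, the whole thing together would look something like this (assuming the standard scripts/txt2img.py from this repo; the prompt is just a placeholder):
# ask the caching allocator to reclaim cached blocks earlier and limit block splitting to reduce fragmentation
export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
# then generate a single sample per iteration
python scripts/txt2img.py --prompt "a placeholder prompt" --n_samples 1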
@ayyar I believe garbage_collection_threshold is only available with PyTorch >= 1.12, whereas it is pinned at 1.11 in the conda environment of this project. I upgraded PyTorch to 1.12, but since I still had memory issues, I switched to https://github.com/basujindal/stable-diffusion, as suggested by @parsec501, which works at least with my RTX 2070 (8GB) on Ubuntu 20.04.
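For anyone unsure which PyTorch version their conda environment actually has, a quick check from the activated environment is:
# prints the installed PyTorch version; garbage_collection_threshold needs 1.12 or newer
python -c "import torch; print(torch.__version__)"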
@ayyar where is the configuration file where I can set this? I'm new to Python environments.
me too...
Try using --W 256 --H 256 as part of your prompt. The default image size is 512x512, which may be the reason you are having this issue.
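For example, something like this (the prompt is just a placeholder; the flags come from scripts/txt2img.py in this repo):
# 256x256 has a quarter of the pixels of the default 512x512, so it needs far less VRAM
python scripts/txt2img.py --prompt "a placeholder prompt" --W 256 --H 256 --n_samples 1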
Well, I now use basujindal's optimizedSD and I can make 1280x832. Try it!
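If I remember the optimized fork correctly, the call is something along these lines (the prompt is a placeholder; check the fork's README for the exact script path and flags):
# optimizedSD loads the model in stages, so larger resolutions fit in 8 GB of VRAM
python optimizedSD/optimized_txt2img.py --prompt "a placeholder prompt" --W 1280 --H 832 --n_samples 1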
I have this same issue; looking at my memory usage, it's at 5 GB before I even try to generate any images, and it just stays there. Any way to get it to let go of that memory so I can use the program?
try basujindal optimizedSD
Do you have a link to a Latent Diffusion Upscaler that I can install on Windows? That's what I was trying to install using a Stable Diffusion webUI installation tutorial I found on the internet. I need that upscaler.
do you mean REALESRGAN?
Please use this Stable Diffusion GUI and it will solve your upscaling issue: stable-diffusion-webgui
Thanks, I got the same issue with the CUDA error, but that guy had a lot of helpful pointers in the troubleshooting section and I managed to get it working.
I was already using REALESRGAN. I was looking for the Latent Diffusion upscaler because it supposedly adds more details and "fixes" the picture. There are some images I get back from Stable Diffusion that just don't have enough detail or are too messed up to be any good. I think this "SD Upscale" option might be what I was looking for.
Actually I fixed it by just replacing attention.py and module.py with the ones from basujindal.
Thanks a lot for that link. That version is probably better than the original one I was trying to install. I'm getting back a lot more usable images.
Where do I run this PYTORCH_CUDA_ALLOC_CONF command?
In a terminal, preferably inside the same environment where you have torch installed. You can run it as export PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128, or use set instead of export if you are on Windows.
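As a concrete sketch, on Windows (cmd) the whole sequence in one terminal would be something like this (again assuming scripts/txt2img.py and a placeholder prompt):
REM the allocator options apply to this terminal session only
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128
REM launch the generation from that same terminal
python scripts/txt2img.py --prompt "a placeholder prompt" --n_samples 1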
Following @ayyar's and @snknitin's posts: I was using the webui version of this, but yes, calling this before stable-diffusion allowed me to run a process that was previously erroring out due to memory allocation errors. Thank you all.
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

This gives me:
Unrecognized CachingAllocator option: garbage_collection_threshold
I am having the exact same problem as you, trying to get the stable-diffusion webui version working on my Windows machine.
Try following this one, it helped me: https://www.youtube.com/watch?v=OjOn0Q_U8cY
basujindal optimizedSD
is it safe?
Yes, but at the moment I suggest Automatic1111 webgui.
Thanks a lot for your reply, but I want to ask you something. I'm working on training a dataset using the yolov7 algorithm, CUDA, PyTorch... and when I run the training it shows me this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.00 MiB (GPU 0; 4.00 GiB total capacity; 2.49 GiB already allocated; 0 bytes free; 2.54 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Do you think Automatic1111 WebGUI will fix this problem?
How much VRAM do you have on your GPU? Make a clean installation of automatic1111, maybe that will fix your issue.
Excuse me, how can I check the VRAM?
I mean your graphics card.
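For what it's worth, if the NVIDIA driver is installed you can also check the card model and its total/used memory from a terminal:
# lists each NVIDIA GPU with its total and currently used memory
nvidia-smi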
Okay, I have an NVIDIA GeForce GTX 1650. Dedicated GPU memory: 0.0/4.0 GB; dedicated video memory: 128 MB; installed RAM: 12.0 GB (11.8 GB usable). Can I download the stable diffusion Automatic1111?