
CUDA out of memory error.

Open redfoo22 opened this issue 2 years ago • 3 comments

I'm able to POST to the API in Docker on my local machine. I get a 200 success after the inpainting function finishes, but on my frontend, when I get the data back, it's returning this:

{"$error":{"code":"PIPELINE_ERROR","name":"RuntimeError","message":"CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 8.00 GiB total capacity; 7.16 GiB already allocated; 0 bytes free; 7.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation
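The message itself hints at one mitigation: PyTorch's `max_split_size_mb` allocator option. A minimal, hypothetical sketch (the value `128` is just an illustrative guess, and the variable must be set before PyTorch first touches CUDA):

```python
# Hypothetical sketch: set the allocator option the error message suggests.
# PYTORCH_CUDA_ALLOC_CONF must be set before PyTorch initializes CUDA,
# so do it before (or at the very top of) the file that imports torch.
import os

# 128 MiB is an illustrative value; smaller split sizes can reduce
# fragmentation at some speed cost.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```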

I have an Aleen laptop: RTX 3070, 64 GB RAM.

Thanks! -foo

redfoo22 avatar Feb 15 '23 03:02 redfoo22

Hey, welcome! Unfortunately it's as it says: you're out of GPU RAM. There is a way to get it working on 8 GB of VRAM, but I hadn't prioritized it until now (you're the first person to need it 😅). It makes things a lot slower, but it's definitely great to be able to dev locally.
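For reference, the usual diffusers knobs for squeezing a pipeline into roughly 8 GiB look something like this sketch (my own illustration, not the repo's actual code; `configure_low_memory` is a hypothetical helper, and `enable_sequential_cpu_offload` requires `accelerate` to be installed):

```python
# Minimal sketch of common VRAM-saving options for a diffusers pipeline.
# `pipe` is any loaded diffusers pipeline object.
def configure_low_memory(pipe):
    """Apply common memory-reduction options for small GPUs."""
    # Compute attention in slices instead of one large batch,
    # trading speed for a smaller peak allocation.
    pipe.enable_attention_slicing()
    # Keep submodules on the CPU and move each to the GPU only while
    # it runs (needs the `accelerate` package).
    pipe.enable_sequential_cpu_offload()
    return pipe
```

Loading the pipeline with `torch_dtype=torch.float16` roughly halves the weight footprint as well.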

I'm about to get on an international flight and have a few other higher-priority issues for when I'm back, but I'll try to get something out in the next week. Watch this space 😁

gadicc avatar Feb 15 '23 06:02 gadicc

Ok, thanks... In the meantime I forked the repo and uploaded it to Banana... I'm able to hit the API, but it's re-downloading the models on every request, not just the first time. I'll get hacking on it. Have a safe flight! This stuff is exciting!
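Re-downloading on every request usually means the weights aren't baked into the image, so each fresh container fetches them again. A common (hypothetical) Dockerfile fragment that downloads at build time instead, assuming a `download.py` that calls `from_pretrained()` for your model:

```dockerfile
# Hypothetical fragment: fetch model weights during `docker build`,
# so they land in an image layer instead of being fetched per request.
COPY download.py .
RUN python download.py
```

This is the general pattern, not necessarily how the repo's own build-download flow is wired up.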

Oh, and I like to dev locally... so what machine specs or brands do you recommend?

redfoo22 avatar Feb 15 '23 08:02 redfoo22

Thanks, enjoy! See the -build-download repo if you haven't already. If I didn't make it clear in the README, there's more info on the forums.

gadicc avatar Feb 15 '23 08:02 gadicc