
Is it possible to distribute the workload across GPUs for InstructPix2Pix?

Open KeshavSingh29 opened this issue 2 years ago • 0 comments

Describe the bug: I'm using 4 GPUs, each with 16 GB of memory, but the InstructPix2Pix model still runs out of memory.

To Reproduce: Running the script on a 16 GB GPU gives the following error:

InstructPix2Pix model loaded.
Loading background image from "test/data/example1/burning.jpg"...
InstructPix2Pix bbox generation...
Instructions:  Add diverse header texts saying \"The problem with burning\" in 24 characters covering 30% area. Add diverse body texts saying \"Exploring the science behind combustion.\" in 40 characters covering 30% area. Add diverse button texts saying \"LEARN ALL ABOUT IT\" in 18 characters covering 10% area.
  0%|                                                                                                                                                                                                                                                                         | 0/25 [00:00<?, ?it/s]
Traceback (most recent call last):

...
...
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.58 GiB. GPU 0 has a total capacty of 14.75 GiB of which 1.18 GiB is free. Including non-PyTorch memory, this process has 13.57 GiB memory in use. Of the allocated memory 13.21 GiB is allocated by PyTorch, and 228.72 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
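The traceback's own suggestion (setting `max_split_size_mb` to reduce fragmentation) can be tried by exporting `PYTORCH_CUDA_ALLOC_CONF` before torch initializes CUDA. A minimal sketch; the 128 MiB value is an illustrative guess, not a value from the BannerGen repo:

```python
import os

# Must be set before the first CUDA allocation (in practice, before
# importing torch in most scripts). Caps the allocator's split block
# size, which can reduce fragmentation-driven OOMs like the one above.
# 128 is an illustrative starting point, not a recommended value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, `export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell before launching the script. Note this only helps when the failure is due to fragmentation; it cannot create memory that isn't there.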

Expected behavior: Normal inference.

Note: Running on AWS. The other models work fine; only this one runs out of memory.
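For context on how the traceback numbers relate to the usual mitigations: here is an illustrative helper (not BannerGen code) that turns the reported totals into an ordered list of things to try. The method names in the strings are real diffusers pipeline APIs (`enable_attention_slicing`, `enable_model_cpu_offload`); the threshold logic and the 0.9 headroom factor are arbitrary illustrative choices:

```python
# Illustrative helper: given the totals reported in a
# torch.cuda.OutOfMemoryError, list common diffusers mitigations,
# cheapest first. Thresholds here are arbitrary examples.
def oom_mitigations(total_gib: float, in_use_gib: float, failed_alloc_gib: float):
    # Rough lower bound on the peak memory inference actually needs.
    peak_gib = in_use_gib + failed_alloc_gib
    plan = []
    if peak_gib > total_gib * 0.9:
        # fp16 weights roughly halve model memory versus fp32.
        plan.append("load with torch_dtype=torch.float16")
        # Slices the attention computation, trading speed for peak memory.
        plan.append("pipe.enable_attention_slicing()")
    if peak_gib > total_gib:
        # Keeps only the active sub-model (text encoder / UNet / VAE)
        # on the GPU, offloading the rest to CPU between steps.
        plan.append("pipe.enable_model_cpu_offload()")
    return plan

# Numbers from the traceback above: 14.75 GiB card, 13.57 GiB in use,
# 1.58 GiB failed allocation -> peak ~15.15 GiB, over the card's total.
for step in oom_mitigations(14.75, 13.57, 1.58):
    print("-", step)
```

Since the reported peak (~15.15 GiB) exceeds a single 14.75 GiB card, fp16 weights alone may already be enough here; CPU offload is the heavier fallback. Whether BannerGen's InstructPix2Pix wrapper exposes these diffusers knobs directly is an open question for the maintainers.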

KeshavSingh29 · Jan 30 '24 08:01