
text_to_image multi-gpu not working

Open Sunflower54 opened this issue 1 year ago • 5 comments

We are training text_to_image on Google Cloud Platform. The JupyterLab instance has 2 GPUs (NVIDIA Tesla P100) with a total memory of 32GB (16GB each). I tried using accelerate to train the text_to_image model with multi-GPU support, but I'm still getting an out-of-memory error. Even with 32GB, I don't understand why it's only using 16GB of memory.

Command used:

```shell
accelerate launch --multi_gpu train_text_to_image.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATASET_DIR \
  --image_column="image" --caption_column="text" \
  --output_dir=$OUTPUT_DIR \
  --train_batch_size=2 --resolution=512 \
  --gradient_accumulation_steps=5 \
  --num_train_epochs=1000 \
  --learning_rate=1e-06 \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention
```

```
[rank1]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 114.00 MiB. GPU has a total capacity of 15.89 GiB of which 89.12 MiB is free. Including non-PyTorch memory, this process has 15.80 GiB memory in use. Of the allocated memory 15.35 GiB is allocated by PyTorch, and 71.31 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
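As a first mitigation, the allocator hint quoted in the traceback itself can be tried before anything else; it reduces fragmentation rather than total memory use, so it is not a guaranteed fix:

```shell
# Allocator hint from the OOM message above. Export it in the shell
# (or notebook cell) before running `accelerate launch`, so the child
# training processes inherit it.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```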


Any help will be much appreciated. Thanks.

Sunflower54 avatar May 09 '24 05:05 Sunflower54

i'm not sure which model you're training, but it looks like you're running into the classic problem with DDP training, aka Distributed Data Parallel.

this style of multi-GPU training runs a single instance of the trainer on each GPU and loads a full copy of everything on both. this means that with 2x 16G GPUs you don't have access to a single 32G pool, just 2x 16G.
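a back-of-the-envelope sketch of that point, with made-up numbers (the 12 GB and 1.5 GB figures below are illustrative assumptions, not measurements of SD 2.1):

```python
# Rough DDP memory math (illustrative numbers, not measurements).
# DDP replicates the full model, gradients, and optimizer state on every
# GPU, so adding GPUs only splits the batch -- the replica cost is fixed.
def per_gpu_memory_gb(replica_gb, activations_per_sample_gb,
                      global_batch, num_gpus):
    """Approximate per-GPU memory: fixed replica + this GPU's share of batch."""
    samples_per_gpu = global_batch / num_gpus
    return replica_gb + activations_per_sample_gb * samples_per_gpu

# Hypothetical figures for a full fine-tune:
replica = 12.0  # weights + gradients + Adam states (assumed)
act = 1.5       # activation memory per sample (assumed)

print(per_gpu_memory_gb(replica, act, global_batch=4, num_gpus=1))
print(per_gpu_memory_gb(replica, act, global_batch=4, num_gpus=2))
# going from 1 to 2 GPUs only trims the activation term; the 12 GB
# replica still has to fit on each 16 GB card.
```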

what you're looking for, to split one model across two GPUs, is called FSDP, fully sharded data parallel, which effectively shards layers across devices at the cost of high communication overhead between GPUs. this kind of thing benefits a lot from NVLink, and it also isn't supported in the Diffusers example trainers, or really any publicly accessible diffusion training toolkit that i'm aware of.
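for reference, this is roughly what an FSDP setup looks like in an `accelerate` config file — a sketch only, since as noted the Diffusers example trainers don't support it; the key names follow accelerate's FSDP plugin, but treat the exact values as assumptions to check against `accelerate config`:

```yaml
# sketch of an accelerate config enabling FSDP (not usable with
# train_text_to_image.py per the discussion above)
compute_environment: LOCAL_MACHINE
distributed_type: FSDP
num_processes: 2
mixed_precision: fp16
fsdp_config:
  fsdp_sharding_strategy: FULL_SHARD
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
```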

bghira avatar May 09 '24 17:05 bghira

Hello, I am using Stable Diffusion 2.1 as the model. Is FSDP not supported for Stable Diffusion? Is there an alternate way to train the model?

Sunflower54 avatar May 10 '24 05:05 Sunflower54

https://github.com/pytorch/pytorch/issues/91165

FSDP isn't supported by pytorch in general

you need GPUs with more VRAM, and in my experience GCP is one of the most expensive routes to get them.

bghira avatar May 10 '24 10:05 bghira

We have to use GCP at the office, as there's no access to physical GPUs. Even with accelerate and --multi_gpu, we can't run the PyTorch models on GCP?

Sunflower54 avatar May 14 '24 05:05 Sunflower54

what i meant is that a 16GB GPU through GCP is not as cost-effective as other platforms like Vast or RunPod, where you can likely rent a single 48GB GPU for less than a dual 16GB instance on GCP.

you can possibly get away with low-rank (LoRA) training on the two 16GB devices, but as they lack native bf16 support (iirc) they are limited in utility.
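to make that concrete: diffusers ships a LoRA variant of the same example script, which needs far less memory than a full fine-tune. a sketch reusing the flags from the original command — the script name and the `--rank` flag are taken from the diffusers examples as i remember them, so double-check them against your checkout:

```shell
accelerate launch --multi_gpu train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATASET_DIR \
  --image_column="image" --caption_column="text" \
  --output_dir=$OUTPUT_DIR \
  --train_batch_size=2 --resolution=512 \
  --gradient_accumulation_steps=5 \
  --learning_rate=1e-06 \
  --rank=4 \
  --gradient_checkpointing \
  --enable_xformers_memory_efficient_attention
```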

bghira avatar May 16 '24 17:05 bghira

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Sep 14 '24 15:09 github-actions[bot]