Multiple GPUs in parallel to train DreamBooth without CUDA out-of-memory errors
I have 2 GPUs and I would like to use both to train DreamBooth without running out of CUDA memory.
They say that I should use `nn.DataParallel`, but I don't know where to put it.
@loboere are you referring to this PyTorch documentation?
```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)
```
I am also curious 🤔 @loboere please try https://github.com/huggingface/accelerate
```
pip install accelerate
accelerate config
```
I strongly advise against using `nn.DataParallel`; even PyTorch no longer recommends it. One should use https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel instead.
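For reference, here is a minimal `DistributedDataParallel` sketch. To keep it self-contained it runs a single process with the CPU `gloo` backend and a world size of 1, just to show the wiring; in practice you would launch one process per GPU (e.g. with `torchrun`) and pass each process's GPU via `device_ids`. The model and data are toy placeholders:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Minimal process-group setup for a single local process (placeholder address/port)
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = nn.Linear(8, 1)   # toy stand-in for the real model
ddp_model = DDP(model)    # gradients are averaged across ranks during backward()

optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
x, y = torch.randn(4, 8), torch.randn(4, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(ddp_model(x), y)
loss.backward()
optimizer.step()

dist.destroy_process_group()
```

Unlike `nn.DataParallel`, which replicates the model inside one process each forward pass, DDP keeps one long-lived process per GPU, which is why PyTorch recommends it even on a single machine.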
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.