Multiple GPU problems?
I use UNet2DModel with nn.DataParallel:

    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

Then I try to get the output:

    model_output = model(noisy_images, timesteps)

model_output is UNet2DOutput(sample=<generator object gather ...>), and model_output.sample is <generator object gather ...>.
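For reference, a minimal sketch of the wrapping pattern described above, using a tiny stand-in module instead of the real UNet2DModel (which is an assumption here just to keep the example self-contained; the real model returns a UNet2DOutput, not a raw tensor):

```python
import torch
import torch.nn as nn

# TinyUNet is a hypothetical stand-in for diffusers' UNet2DModel, kept
# minimal so the wrapping pattern itself is easy to see.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, noisy_images, timesteps):
        # timesteps is accepted to mirror the real signature but ignored here
        return self.conv(noisy_images)

model = TinyUNet()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

noisy_images = torch.randn(2, 3, 8, 8)
timesteps = torch.tensor([10, 20])
model_output = model(noisy_images, timesteps)
print(model_output.shape)  # torch.Size([2, 3, 8, 8])
```

On a single-device machine the nn.DataParallel branch is skipped and the forward pass returns the module's output directly.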
model_output.sample is a generator that yields the outputs rather than a single tensor. To retrieve all the outputs:

    for i, out in enumerate(model_output.sample):
        print("output on gpu {}:".format(i), out)

To retrieve just the first element:

    output = next(model_output.sample)
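A small self-contained sketch of consuming such a generator, using a hypothetical fake_gather() in place of the real gathered .sample attribute:

```python
import torch

# Hypothetical generator of per-device output tensors, standing in for
# the gathered .sample attribute described above.
def fake_gather():
    yield torch.zeros(2, 3)
    yield torch.ones(2, 3)

sample_gen = fake_gather()

# Materialize the generator once, then concatenate along the batch dimension.
chunks = list(sample_gen)
full_output = torch.cat(chunks, dim=0)
print(full_output.shape)  # torch.Size([4, 3])
```

Note that a generator can only be consumed once: iterating it and then calling next() on it again will raise StopIteration, so call list() a single time and reuse the result.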
I hope this helps. If you have other queries, please respond.
I think the problem is that there is nothing inside the generator. If I call next() on it, I get:
File "/python3.10/site-packages/torch/nn/parallel/scatter_gather.py", line 69, in
Maybe I should try nn.parallel.DistributedDataParallel
Yeah, maybe you should try:

    model = nn.parallel.DistributedDataParallel(model)

to parallelize the forward pass of a model across multiple GPUs.
Please note nn.DataParallel is never recommended. If you look at the official PyTorch docs (https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html), there is an array of warnings, with the first one saying very clearly to use https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel instead.
We don't support nn.DataParallel in diffusers as we don't see any advantage of using it.
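As a minimal sketch of the DistributedDataParallel setup, here is a single-process example using the CPU "gloo" backend so it runs anywhere; real multi-GPU training would use the "nccl" backend with one process per GPU, typically launched via torchrun (the address, port, and Linear stand-in model are assumptions for illustration):

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# DDP requires an initialized process group; for this single-process
# CPU sketch we point the rendezvous at localhost.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = nn.Linear(4, 2)   # stand-in for the real model
ddp_model = DDP(model)    # gradients are synchronized across processes

out = ddp_model(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])

dist.destroy_process_group()
```

Unlike nn.DataParallel, the forward pass here returns an ordinary tensor; each process owns its own replica, so there is no gather generator to unpack.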
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.