
Abnormal size for LEDITS++ model?

Open · vdelale opened this issue

Describe the bug

I tried to run the diffusers pipeline for LEDITS++ with LEditsPPPipelineStableDiffusionXL, but I encounter a CUDA out-of-memory error, which I find abnormal since the error states that it tried to allocate an additional 136.51 GiB. Here is all the information needed.

Reproduction

  • Created and activated a virtual env: python -m venv .leditspp_env && source .leditspp_env/bin/activate
  • Installed accelerate and transformers: pip install accelerate transformers
  • Installed diffusers from source: pip install git+https://github.com/huggingface/diffusers, as mentioned in the Hugging Face installation docs (install from source). I needed to install it from source, otherwise I had the error I reported in #7972.
  • Ran the Python script from the Hugging Face docs for LEDITS++:
import torch
import PIL
import requests
from io import BytesIO
from diffusers import LEditsPPPipelineStableDiffusionXL

pipe = LEditsPPPipelineStableDiffusionXL.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

pipe = pipe.to("cuda")

def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")

img_url = "https://www.aiml.informatik.tu-darmstadt.de/people/mbrack/tennis.jpg"
image = download_image(img_url)

# Invert the input image into the model's latent space
_ = pipe.invert(
    image=image,
    num_inversion_steps=50,
    skip=0.2,
)

# Apply the edits: remove the tennis ball, add a tomato
edited_image = pipe(
    editing_prompt=["tennis ball", "tomato"],
    reverse_editing_direction=[True, False],
    edit_guidance_scale=[5.0, 10.0],
    edit_threshold=[0.9, 0.85],
).images[0]

Logs

$ python test_leditspp.py 
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00,  2.02it/s]
This pipeline only supports DDIMScheduler and DPMSolverMultistepScheduler. The scheduler has been changed to DPMSolverMultistepScheduler.
Your input images far exceed the default resolution of the underlying diffusion model. The output images may contain severe artifacts! Consider down-sampling the input using the `height` and `width` parameters
Traceback (most recent call last):
  File "/home/vdelale/code/test_leditspp.py", line 20, in <module>
    _ = pipe.invert(
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/pipelines/ledits_pp/pipeline_leditspp_stable_diffusion_xl.py", line 1576, in invert
    image_rec = self.vae.decode(
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 303, in decode
    decoded = self._decode(z, return_dict=False)[0]
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl.py", line 276, in _decode
    dec = self.decoder(z)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/autoencoders/vae.py", line 337, in forward
    sample = up_block(sample, latent_embeds)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 2750, in forward
    hidden_states = upsampler(hidden_states)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/diffusers/models/upsampling.py", line 180, in forward
    hidden_states = self.conv(hidden_states)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/vdelale/code/.leditspp_env/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 136.51 GiB. GPU  has a total capacity of 79.11 GiB of which 21.06 GiB is free. Process 33554 has 1.25 GiB memory in use. Process 3812073 has 2.32 GiB memory in use. Process 3812837 has 1.04 GiB memory in use. Including non-PyTorch memory, this process has 53.40 GiB memory in use. Of the allocated memory 48.46 GiB is allocated by PyTorch, and 4.21 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
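As an aside, the error message itself suggests a fragmentation mitigation. It would not help with a genuinely oversized single allocation like this one, but it can be worth trying when the reserved-but-unallocated figure is large:

```shell
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python test_leditspp.py
```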

System Info

OS: Ubuntu 22.04.3 LTS
Python version: 3.10.12
Python packages:

  • diffusers==0.27.2
  • torch==2.3.0
  • transformers==4.40.2

GPU: H100 (approximately 80 GiB of memory)

Who can help?

Maybe @yiyixuxu, @sayakpaul, or @DN6; I don't know to what extent LEditsPPPipelineStableDiffusionXL is linked to StableDiffusionXLPipeline.

vdelale avatar May 23 '24 13:05 vdelale

Cc: @linoytsaban

sayakpaul avatar May 24 '24 10:05 sayakpaul

Hey @vdelale! I missed this bug. I think it's related to the size of the image; does it still error if you resize it, e.g. by adding image = image.resize((512, 512))?
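A small helper along these lines would apply the suggested fix while keeping the aspect ratio (a sketch; the 1024-pixel cap is an assumption based on SDXL's default resolution, not something the pipeline enforces):

```python
from PIL import Image

def downsample(image: Image.Image, max_side: int = 1024) -> Image.Image:
    """Shrink the image so its longer side is at most max_side,
    preserving the aspect ratio; return it unchanged if already small."""
    w, h = image.size
    if max(w, h) <= max_side:
        return image
    scale = max_side / max(w, h)
    return image.resize((round(w * scale), round(h * scale)))
```

This would be called right after download_image(img_url), before pipe.invert.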

linoytsaban avatar Jun 08 '24 16:06 linoytsaban

Sorry for the long wait; yes, it worked. However, I encountered another error, the same as the one mentioned in #7972. Curiously, the error did not occur on the first generation call, only on subsequent ones. This time, I added some lines to the source code of diffusers, mainly in pipeline_leditspp_stable_diffusion_xl.py and some other scripts, to cast the tensors to the right device and torch.dtype.
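The kind of patch described here might look roughly like this (a hypothetical helper named match_pipe, not the actual change made to pipeline_leditspp_stable_diffusion_xl.py):

```python
import torch

def match_pipe(tensor: torch.Tensor, device: torch.device, dtype: torch.dtype) -> torch.Tensor:
    """Move a tensor onto the pipeline's device and dtype before it is
    fed back into the VAE/UNet, avoiding mixed device/dtype errors on
    repeated pipeline calls."""
    if tensor.device != device or tensor.dtype != dtype:
        tensor = tensor.to(device=device, dtype=dtype)
    return tensor
```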

vdelale avatar Jun 18 '24 13:06 vdelale