
IndexError: index 29 is out of bounds for dimension 0 with size 29

Open Anvarka opened this issue 1 year ago • 6 comments

Describe the bug

I'm seeing three problems that all appear to share the same root cause:

  1. `TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'`, raised at `self._step_index += 1` (the "upon completion increase step index by one" line in the scheduler).
  2. `IndexError: index 29 is out of bounds for dimension 0 with size 29`, raised at `sigma_next = self.sigmas[self.step_index + 1]`.
  3. `RuntimeError: Already borrowed`, raised at `self._tokenizer.no_truncation()` inside `if _truncation is not None:`. Example: https://github.com/huggingface/tokenizers/issues/537

As I understand it, the cause is running the pipeline from multiple threads. Do you know how I can solve this problem?
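For context on why threads trigger all three errors: the scheduler keeps mutable per-run state (`_step_index`), so two generations sharing one scheduler instance can reset or over-advance each other's index. Below is a minimal sketch of one possible workaround, serializing whole runs with a lock. It uses a hypothetical toy scheduler stand-in, not the real diffusers API, purely to illustrate the pattern:

```python
import threading

# Toy stand-in for a diffusers scheduler: like FlowMatchEulerDiscreteScheduler,
# it keeps mutable per-run state (_step_index), which is what breaks when two
# generations share one instance. Illustrative only, not the diffusers API.
class ToyScheduler:
    def __init__(self, num_steps):
        self.num_steps = num_steps
        self._step_index = None  # None until set_timesteps() is called

    def set_timesteps(self):
        self._step_index = 0

    def step(self):
        if self._step_index is None:
            # Matches the reported TypeError: index is None because another
            # run has not initialized it (or reset it mid-flight).
            raise TypeError("step index is None")
        if self._step_index >= self.num_steps:
            # Matches the reported IndexError: a concurrent run advanced
            # the shared index past the end of the sigmas table.
            raise IndexError(f"index {self._step_index} is out of bounds")
        self._step_index += 1

_lock = threading.Lock()
scheduler = ToyScheduler(num_steps=4)
results = {}

def generate(i):
    # Holding the lock for the whole run keeps the shared state consistent.
    with _lock:
        scheduler.set_timesteps()
        for _ in range(scheduler.num_steps):
            scheduler.step()
        results[i] = scheduler._step_index  # every run finishes at num_steps

threads = [threading.Thread(target=generate, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The obvious cost of this workaround is that generations no longer overlap on the GPU; it trades throughput for correctness.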

Reproduction

from diffusers import (
    FluxPipeline,
    FlowMatchEulerDiscreteScheduler,
)
import torch

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 42
height = 720
width = 1280
prompt = "a forest at dusk"  # placeholder; the original snippet leaves `prompt` undefined

generator = torch.Generator(device="cuda").manual_seed(seed)

pipeline(
    prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
    guidance_scale=0.,
    # num_inference_steps=10,
    height=height,
    width=width,
    generator=generator,
    max_sequence_length=256,
).images[0]

Logs

For example:
 Traceback (most recent call last):
   File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
     response = self.full_dispatch_request()
   File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
     rv = self.handle_user_exception(e)
   File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
     rv = self.dispatch_request()
   File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
     return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
   File "/app/main.py", line 29, in generate_image
     image = imagegen.run(**data)
   File "/app/image_generator.py", line 102, in run
     return generate_image()
   File "/app/image_generator.py", line 89, in generate_image
     return self.pipeline(
   File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
     return func(*args, **kwargs)
   File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 734, in __call__
     latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
   File "/opt/conda/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 295, in step
     sigma_next = self.sigmas[self.step_index + 1]
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'

System Info

  • 🤗 Diffusers version: 0.31.0.dev0
  • Platform: Linux-5.4.0-171-generic-x86_64-with-glibc2.35
  • Running on Google Colab?: No
  • Python version: 3.10.13
  • PyTorch version (GPU?): 2.2.1 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.24.6
  • Transformers version: 4.44.2
  • Accelerate version: 0.34.0
  • PEFT version: 0.12.0
  • Bitsandbytes version: not installed
  • Safetensors version: 0.4.4
  • xFormers version: not installed
  • Accelerator: NVIDIA RTX A6000, 46068 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@yiyixuxu @sayakpaul @DN6

Anvarka avatar Sep 04 '24 11:09 Anvarka

Hi, the code works fine with plain diffusers. Your issue is related to your own implementation and to tokenizers, for which you already linked the relevant issue in the correct repo.

There is not much we can do here, since we don't have access to your full code and this also goes beyond the scope of the help we can provide.

If you're able to reproduce the error with just diffusers, please post a snippet of the code and we can help you.

asomoza avatar Sep 04 '24 12:09 asomoza

@Anvarka are you able to run this plain diffusers script that you provided?

from diffusers import (
    FluxPipeline,
    FlowMatchEulerDiscreteScheduler,
)
import torch

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 42
height = 720
width = 1280
prompt = "a forest at dusk"  # placeholder; the original snippet leaves `prompt` undefined

generator = torch.Generator(device="cuda").manual_seed(seed)

pipeline(
    prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
    guidance_scale=0.,
    # num_inference_steps=10,
    height=height,
    width=width,
    generator=generator,
    max_sequence_length=256,
).images[0]

yiyixuxu avatar Sep 04 '24 17:09 yiyixuxu

I'm facing the same problem. It always occurs when two images are generated simultaneously on one GPU.

OlegRuban-ai avatar Sep 06 '24 09:09 OlegRuban-ai

@OlegRuban-ai can you please post the plain diffusers snippet that you're using and that produces the error? Also can you post your environment too?

asomoza avatar Sep 06 '24 09:09 asomoza

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Oct 04 '24 15:10 github-actions[bot]

Hi @Anvarka, could you respond to the comments above if the problem still persists? If it's fixed, can I close this?

a-r-r-o-w avatar Oct 15 '24 21:10 a-r-r-o-w

I'm facing the same problem. It always occurs when two images are generated simultaneously on one GPU.

Did you solve this problem? I'm facing the same one, too.
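One workaround pattern (an assumption on my part, not an official diffusers recommendation) is to stop sharing the stateful object between concurrent runs and give each request its own copy. The sketch below shows the idea with a hypothetical stand-in class and `copy.deepcopy`; in real diffusers code the analogue would be constructing a fresh scheduler per request from the template scheduler's config, while the heavy model weights stay shared:

```python
import copy
import threading

# Hypothetical stand-in for any object with mutable per-run state,
# such as a diffusers scheduler. Not the actual diffusers API.
class StatefulScheduler:
    def __init__(self, num_steps):
        self.num_steps = num_steps
        self._step_index = None

    def run(self):
        self._step_index = 0
        for _ in range(self.num_steps):
            self._step_index += 1
        return self._step_index

template = StatefulScheduler(num_steps=4)
results = {}

def worker(i):
    # Each thread mutates its own deep copy, so concurrent runs cannot
    # clobber each other's _step_index.
    local = copy.deepcopy(template)
    results[i] = local.run()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note this only removes the scheduler-state race; the `RuntimeError: Already borrowed` from tokenizers is a separate thread-safety issue inside the tokenizer and may still require serializing tokenizer calls.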

EvanSong77 avatar Nov 01 '24 08:11 EvanSong77

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Nov 25 '24 15:11 github-actions[bot]