IndexError: index 29 is out of bounds for dimension 0 with size 29
Describe the bug
I'm hitting three different errors that appear to share the same root cause:
- TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int' — raised at self._step_index += 1 ("upon completion increase step index by one")
- IndexError: index 29 is out of bounds for dimension 0 with size 29 — raised at sigma_next = self.sigmas[self.step_index + 1]
- RuntimeError: Already borrowed — raised at self._tokenizer.no_truncation() inside if _truncation is not None; see https://github.com/huggingface/tokenizers/issues/537 for a similar report
As far as I understand, the cause is threads: several requests share the same pipeline object, so its mutable state gets corrupted. Do you know how I can solve this problem?
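The way I understand it, two concurrent requests drive the same scheduler and advance the same step counter. The effect can be simulated deterministically with a toy class (ToyScheduler below is illustrative, not the real diffusers code): two interleaved "requests" advance the shared counter twice as fast as either expects, walking the index past the end of the sigma table, exactly like the IndexError above.

```python
class ToyScheduler:
    """Mimics the mutable per-run state of FlowMatchEulerDiscreteScheduler."""

    def __init__(self):
        self.sigmas = []
        self._step_index = None  # reset at the start of every run

    def set_timesteps(self, num_steps):
        self.sigmas = list(range(num_steps + 1))
        self._step_index = None

    def step(self):
        if self._step_index is None:
            self._step_index = 0
        # Fails once the shared counter drifts past the sigma table.
        sigma_next = self.sigmas[self._step_index + 1]
        self._step_index += 1
        return sigma_next


sched = ToyScheduler()
sched.set_timesteps(4)  # sigmas has 5 entries, valid step indices 0..3

# Requests A and B interleave on ONE scheduler, so the counter advances
# twice per "logical" step and overruns the table before A finishes.
results, error = [], None
try:
    for _ in range(4):           # each request believes it has 4 steps left
        results.append(sched.step())  # "request A"
        results.append(sched.step())  # "request B"
except IndexError as exc:
    error = exc                  # the shared counter ran off the end
```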
Reproduction
from diffusers import (
    FluxPipeline,
    FlowMatchEulerDiscreteScheduler,
)
import torch

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 42
height = 720
width = 1280
prompt = "a fox in a forest"  # placeholder; any prompt reproduces the issue
generator = torch.Generator(device="cuda").manual_seed(seed)

pipeline(
    prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
    guidance_scale=0.,
    # num_inference_steps=10,
    height=height,
    width=width,
    generator=generator,
    max_sequence_length=256,
).images[0]
Logs
For example:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
response = self.full_dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/conda/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
File "/app/main.py", line 29, in generate_image
image = imagegen.run(**data)
File "/app/image_generator.py", line 102, in run
return generate_image()
File "/app/image_generator.py", line 89, in generate_image
return self.pipeline(
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 734, in __call__
latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]
File "/opt/conda/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py", line 295, in step
sigma_next = self.sigmas[self.step_index + 1]
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
System Info
- 🤗 Diffusers version: 0.31.0.dev0
- Platform: Linux-5.4.0-171-generic-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.2.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.24.6
- Transformers version: 4.44.2
- Accelerate version: 0.34.0
- PEFT version: 0.12.0
- Bitsandbytes version: not installed
- Safetensors version: 0.4.4
- xFormers version: not installed
- Accelerator: NVIDIA RTX A6000, 46068 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
Who can help?
@yiyixuxu @sayakpaul @DN6
Hi, the code works fine with plain diffusers. Your issue is related to your own implementation and to tokenizers, for which you already linked an issue in the correct repo.
There is not much we can do here, since we don't have access to your full code and this goes beyond the scope of the help we can provide.
If you're able to reproduce the error with just diffusers, please post a snippet of the code and we can help you.
@Anvarka are you able to run this plain diffusers script that you provided?
from diffusers import (
    FluxPipeline,
    FlowMatchEulerDiscreteScheduler,
)
import torch

pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

seed = 42
height = 720
width = 1280
generator = torch.Generator(device="cuda").manual_seed(seed)

pipeline(
    prompt=prompt + ", highly detailed, all is depicted as silhouettes, without words",
    guidance_scale=0.,
    # num_inference_steps=10,
    height=height,
    width=width,
    generator=generator,
    max_sequence_length=256,
).images[0]
I'm facing the same problem. It always occurs when 2 images are generated simultaneously on one video card.
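One workaround sketch for exactly this situation (pipeline_lock and generate are illustrative names, not a diffusers API): serialize access to the shared pipeline so two requests never run scheduler.step() on the same scheduler concurrently.

```python
import threading

# Module-level lock: only one request may drive the shared pipeline at a
# time, so the scheduler's _step_index is never mutated by two threads.
pipeline_lock = threading.Lock()


def generate(pipeline, **kwargs):
    # Serializes every generation. Throughput drops to one image at a time,
    # but the pipeline's mutable per-run state stays consistent.
    with pipeline_lock:
        return pipeline(**kwargs).images[0]
```

This trades concurrency for correctness; if you need true parallelism, each worker needs its own pipeline (or at least its own scheduler) instance.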
@OlegRuban-ai can you please post the plain diffusers snippet you're using that produces the error? Could you also post your environment?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi @Anvarka, could you respond to the comments above if the problem still persists? If it's fixed, can I close this?
I'm facing the same problem. It always occurs when 2 images are generated simultaneously on one video card.
Did you solve this problem? I'm facing the same problem, too.
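Another common workaround (a sketch, assuming the scheduler holds only per-run state; scheduler_for_request is an illustrative helper, not a diffusers API): give each concurrent request its own scheduler instance instead of sharing one.

```python
import copy


def scheduler_for_request(base_scheduler):
    # Each request gets an independent copy, so per-run fields such as
    # _step_index and sigmas never collide between concurrent generations.
    # With diffusers you could instead rebuild one from its config, e.g.
    # type(base_scheduler).from_config(base_scheduler.config).
    return copy.deepcopy(base_scheduler)
```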