Segmentation fault (core dumped) during TextToVideoZeroPipeline inference
Describe the bug
Inference with TextToVideoZeroPipeline crashes with a segmentation fault (core dumped) at the first denoising step.
Reproduction
Following the usage example at https://huggingface.co/docs/diffusers/v0.29.0/en/api/pipelines/text_to_video_zero#usage-example
import torch
import imageio
from diffusers import TextToVideoZeroPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
print(f"prompt : {prompt}")

# The pipeline returns the generated frames as float arrays in [0, 1]
result = pipe(prompt=prompt).images
print("output:", result.shape)

# Convert each frame to uint8 before encoding the video
result = [(r * 255).astype("uint8") for r in result]
print(result[0].shape, len(result))

imageio.mimsave("video.mp4", result, fps=4)
Logs
When executing
$ python t2v-zero.py
-----------------------
${anaconda_path}/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_validation.py:114: UserWarning: WARNING: failed to get cudart_version from onnxruntime build info.
warnings.warn("WARNING: failed to get cudart_version from onnxruntime build info.")
${anaconda_path}/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py:34: FutureWarning: `Transformer2DModelOutput` is deprecated and will be removed in version 1.0.0. Importing `Transformer2DModelOutput` from `diffusers.models.transformer_2d` is deprecated and this will be removed in a future version. Please use `from diffusers.models.modeling_outputs import Transformer2DModelOutput`, instead.
deprecate("Transformer2DModelOutput", "1.0.0", deprecation_message)
Loading pipeline components...: 100%|████████████| 7/7 [00:01<00:00, 4.11it/s]
prompt : A panda is playing guitar on times square
0%| | 0/2 [00:00<?, ?it/s]
Segmentation fault (core dumped)
System Info
- 🤗 Diffusers version: 0.29.0.dev0
- Platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35
- Running on a notebook?: No
- Running on Google Colab?: No
- Python version: 3.10.13
- PyTorch version (GPU?): 2.3.0+cu121 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.23.3
- Transformers version: 4.37.2
- Accelerate version: 0.30.1
- PEFT version: 0.11.1
- Bitsandbytes version: 0.43.1
- Safetensors version: 0.4.2
- xFormers version: 0.0.26.post1
- Accelerator: Quadro RTX 6000, 24576 MiB VRAM; Quadro RTX 6000, 24576 MiB VRAM
- Using GPU in script?: Yes (`.to("cuda")`)
- Using distributed or parallel set-up in script?: No
Who can help?
@DN6
Could you try a fresh environment with a different CUDA version of torch?
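Before rebuilding the environment, a quick check along these lines (a minimal sketch; it only assumes a CUDA-enabled torch install) shows which builds are actually active, since a mismatch between the torch, CUDA, cuDNN, and xFormers binaries tends to surface as a hard segfault rather than a Python exception:

import torch

print("torch:", torch.__version__)                # e.g. 2.3.0+cu121
print("CUDA (torch build):", torch.version.cuda)  # CUDA toolkit torch was built against
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())

try:
    import xformers
    print("xformers:", xformers.__version__)
except ImportError:
    print("xformers: not installed")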
Hi @hyunW3, I'm unable to reproduce this using a T4 GPU. Are you running on a machine with low RAM?
No, I'm working on a machine with 128 GB DRAM and 24 GB VRAM. I tried another environment (PyTorch 2.2.0 on Python 3.9.18, CUDA 11.8, and cuDNN 8.7.0) and it works! Thank you for helping.
I encountered the same error when using sdxl-turbo with torch 2.2.1+cu118, but the error does not happen with torch 2.0.1+cu118. Could anyone explain this?
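One way to narrow this down is to take the compiled attention kernels out of the picture. The sketch below assumes (without confirmation) that the crash originates there: it disables xFormers' memory-efficient attention and restricts PyTorch's scaled_dot_product_attention to the plain math backend.

import torch
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Make sure xFormers kernels are not used, even if xFormers is installed.
pipe.disable_xformers_memory_efficient_attention()

# Restrict scaled_dot_product_attention to the unfused math backend
# (torch >= 2.0). If the segfault disappears here, the fused
# flash/memory-efficient kernels of this torch build are the prime suspect.
with torch.backends.cuda.sdp_kernel(
    enable_flash=False, enable_math=True, enable_mem_efficient=False
):
    result = pipe(prompt="A panda is playing guitar on times square").images

If the crash persists even with the math backend, the problem more likely sits below PyTorch (driver or CUDA runtime) than in any single attention implementation.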