diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
### Describe the bug When I use IP_adapter and hd_painter at the same time, it raises: RuntimeError: mat1 and mat2 shapes cannot be multiplied (514x1280 and 1024x3072). It...
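A toy illustration of why this error occurs (a sketch that assumes nothing about the pipeline internals; the shapes are taken from the report): a matmul `mat1 @ mat2` requires the inner dimensions to agree, and here 1280 vs. 1024 do not, which typically means two components were wired together that expect different hidden sizes.

```python
# Hypothetical illustration of the reported shape mismatch.
# torch.matmul / nn.Linear require mat1.shape[1] == mat2.shape[0].
mat1_shape = (514, 1280)   # e.g. image-prompt embeddings (shape from the report)
mat2_shape = (1024, 3072)  # e.g. an attention projection weight (from the report)

compatible = mat1_shape[1] == mat2_shape[0]
print(compatible)  # False -> "mat1 and mat2 shapes cannot be multiplied"
```

The mismatch (1280 vs. 1024) suggests the two features combined in the bug report assume different attention dimensions, so their projections cannot be composed directly.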
Hello folks! https://github.com/huggingface/diffusers/pull/7944/ introduced support for [Perturbed Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/), which enhances image generation quality training-free. [Comparison table: generated image without PAG vs. generated image with PAG]
ConsistencyTTA is an efficient text-to-audio generation model. Compared to a comparable diffusion-based TTA model, ConsistencyTTA achieves...
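A minimal sketch of the PAG idea in plain NumPy (hypothetical and heavily simplified; names like `pag_scale` follow the paper, not any specific diffusers API): the "perturbed" branch replaces the self-attention map with the identity, and the final prediction is guided away from it, analogous to classifier-free guidance.

```python
import numpy as np

def attention(q, k, v, identity_attn=False):
    """Scaled dot-product attention; optionally perturbed per PAG."""
    if identity_attn:
        attn = np.eye(q.shape[0])  # PAG perturbation: identity attention map
    else:
        scores = q @ k.T / np.sqrt(q.shape[1])
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn = e / e.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))

normal = attention(q, k, v)
perturbed = attention(q, k, v, identity_attn=True)

pag_scale = 3.0  # guidance strength (illustrative value)
guided = normal + pag_scale * (normal - perturbed)  # PAG-style guidance update
print(guided.shape)  # (4, 8)
```

The guided output keeps the same shape as the normal branch; in the real pipeline this update is applied to the noise prediction at each denoising step.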
# What does this PR do? Avoids incurring the following deprecation warning: ```bash huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If...
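A self-contained reproduction of the deprecation pattern behind this PR (a sketch with a stand-in `download` function, not the real `huggingface_hub` code): passing a kwarg the library has deprecated triggers a `FutureWarning` on every call, which is why callers stop passing it.

```python
import warnings

def download(repo_id, resume_download=None):
    """Stand-in for a download helper with a deprecated kwarg."""
    if resume_download is not None:
        warnings.warn(
            "`resume_download` is deprecated and will be removed in version "
            "1.0.0. Downloads always resume when possible.",
            FutureWarning,
        )
    return repo_id  # stand-in for the actual download result

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    download("some/repo", resume_download=True)   # warns
    download("some/repo")                          # silent: kwarg omitted
print(len(caught))  # 1
```

Dropping the argument entirely, as the PR does, silences the warning without changing behavior, since downloads resume by default.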
# What does this PR do? Part of #8384 Test script ```bash export MODEL_DIR="runwayml/stable-diffusion-v1-5" export OUTPUT_DIR="controlnet_output" accelerate launch train_controlnet.py \ --pretrained_model_name_or_path=$MODEL_DIR \ --output_dir=$OUTPUT_DIR \ --dataset_name=fusing/fill50k \ --resolution=512 \ --num_train_epochs=100 \...
### Describe the bug In the implementation of PixArtAlphaPipeline, one-step inference was bound to DMD, which is inappropriate. This resulted in errors in other one-step inference code based...
This is a ComfyUI workflow; how can I reproduce it in a diffusers Colab workflow?
### Describe the bug In the attention implementation of SD3, attention masks are currently not used. This results in inconsistent outputs for different values of `max_seq_length` where padding exists...
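A toy NumPy sketch of the issue being reported (illustrative only; not the SD3 code): without a mask, padding positions receive nonzero attention weight, so lengthening `max_seq_length` with extra padding changes the output.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy attention scores: one query over 2 real tokens + 2 padding tokens.
scores = np.array([[1.0, 0.5, 0.0, 0.0]])
mask = np.array([[1, 1, 0, 0]])  # 1 = real token, 0 = padding

unmasked = softmax(scores)                              # padding gets weight
masked = softmax(np.where(mask, scores, -np.inf))       # padding weight is 0
print(np.allclose(unmasked, masked))  # False -> padding changes the output
```

With the mask applied, the padded positions contribute zero weight, so the result no longer depends on how much padding `max_seq_length` introduces.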
### Describe the bug Running CFGCutoffCallback with ControlNet SDXL raises the following error ```` diffusers/src/diffusers/models/attention.py:372, in BasicTransformerBlock.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, timestep, cross_attention_kwargs, class_labels, added_cond_kwargs) 364 norm_hidden_states = self.pos_embed(norm_hidden_states) 366...
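A hypothetical sketch of why a CFG-cutoff callback can break a ControlNet path (the function below is illustrative, not the diffusers implementation): with classifier-free guidance, inputs are duplicated into conditional + unconditional halves; after the cutoff step the callback drops the unconditional half, so any component still assuming the doubled batch hits a shape mismatch.

```python
def apply_cfg_cutoff(batch_size, step, cutoff_step):
    """Effective batch size seen by the model under a CFG cutoff.

    Before the cutoff step the batch carries both the conditional and
    unconditional halves; afterwards guidance is disabled and only the
    conditional half remains.
    """
    return batch_size if step < cutoff_step else batch_size // 2

print(apply_cfg_cutoff(2, 0, 5), apply_cfg_cutoff(2, 5, 5))  # 2 1
```

A ControlNet branch that still feeds residuals sized for the full CFG batch after the cutoff would then disagree with the UNet's halved batch, producing errors like the one in the traceback above.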
### Describe the bug When invoking pipeline.load_lora_weights(path), memory usage increases incrementally over multiple iterations, indicating a potential memory management issue. Although pipeline.unload_lora_weights() is called after each load operation,...
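A minimal harness for detecting this kind of leak (a sketch: `load_weights`/`unload_weights` are hypothetical stand-ins for the load/unload pair in the report, with the leak simulated deliberately): `tracemalloc` measures traced Python allocations before and after repeated load/unload cycles, and growth across iterations indicates state that unload fails to release.

```python
import tracemalloc

leaked = []  # simulates internal state that unload fails to release

def load_weights():
    leaked.append(bytearray(1024 * 1024))  # retains ~1 MiB per iteration

def unload_weights():
    pass  # a correct implementation would release what load_weights kept

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(5):
    load_weights()
    unload_weights()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(after - before > 4 * 1024 * 1024)  # True -> memory grew across iterations
```

Running the same loop against the real pipeline (loading and unloading actual LoRA weights) and seeing monotonic growth would confirm the reported behavior.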