
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

Results: 1293 diffusers issues

Add the missing get_velocity function (required for v-prediction) from DDPMScheduler, following #2351. This allows DEISMultistepScheduler to be a drop-in replacement for DDPMScheduler.
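For context, the v-prediction target that `get_velocity` computes in `DDPMScheduler` can be sketched in scalar form. This is a minimal sketch, not the library implementation: the real method operates on tensors and gathers `alphas_cumprod` per timestep, but the formula is the same.

```python
import math

def get_velocity(sample: float, noise: float, alpha_bar_t: float) -> float:
    """v-prediction target: v_t = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0.

    `alpha_bar_t` is the cumulative product of (1 - beta) up to timestep t.
    """
    return math.sqrt(alpha_bar_t) * noise - math.sqrt(1.0 - alpha_bar_t) * sample

# Sanity checks: at alpha_bar_t = 1 (no noise) the target is the noise itself;
# at alpha_bar_t = 0 (pure noise) it is the negated clean sample.
print(get_velocity(sample=0.5, noise=1.0, alpha_bar_t=1.0))  # -> 1.0
print(get_velocity(sample=1.0, noise=0.0, alpha_bar_t=0.0))  # -> -1.0
```

A scheduler that exposes this function can train v-prediction models the same way `DDPMScheduler` does, which is what makes the drop-in replacement possible.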

Just saw this https://github.com/cloneofsimo/paint-with-words-sd, which is based on NVIDIA's paint-with-words paper. If it's not yet implemented, a community pipeline would be pretty cool. Essentially it's img2img, but...

community-examples
stale

I noticed that when I train a custom model with train_dreambooth.py there are two different schedulers, a regular scheduler and a noise scheduler, but there is only one scheduler_config.json in my...

For multi-GPU training: `model = accelerator.prepare(model)` I want to save a checkpoint during training, so I do this: `model = accelerator.unwrap_model(model)` `pipeline = Pipeline(model=model)` `pipeline.save_pretrained(...)` And then I want to continue training....

stale

I'm trying to use the latest samplers, which provide better performance with Stable Diffusion, but I couldn't find the DPM++ 2M Karras sampler in the diffusers library. These...

stale
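For reference, the "Karras" part of DPM++ 2M Karras is a resampled noise schedule from Karras et al. (2022); in recent diffusers releases it can be enabled on `DPMSolverMultistepScheduler` via `use_karras_sigmas=True`, if your version supports that flag. The schedule itself is simple: interpolate linearly in sigma^(1/rho) space. A minimal pure-Python sketch (default values are illustrative, not the library's):

```python
def karras_sigmas(n: int, sigma_min: float = 0.1, sigma_max: float = 10.0,
                  rho: float = 7.0) -> list[float]:
    """Karras et al. (2022) schedule: n sigmas from sigma_max down to sigma_min,
    spaced linearly in sigma^(1/rho) space so that small sigmas are sampled densely."""
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return [(max_inv + i / (n - 1) * (min_inv - max_inv)) ** rho for i in range(n)]

sigmas = karras_sigmas(10)
print(round(sigmas[0], 4), round(sigmas[-1], 4))  # -> 10.0 0.1
```

With `rho = 7` most of the steps are spent at low noise levels, which is why the Karras spacing tends to give sharper results at low step counts than uniform spacing.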

I fine-tuned a Stable Diffusion model on my own dataset. The training script is like https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py. The text_to_image pipeline outputs images with styles different from my expectations. But if I...

stale

### Model/Pipeline/Scheduler description It would be nice to have a pipeline that combines the `StableDiffusionImageVariationPipeline` with the `StableDiffusionDepth2ImgPipeline`, i.e. a pipeline that creates an image from a depth map, but...

community-examples
stale

the `attention_head_dim` in `UNet2DConditionModel` seems to be passed down to `CrossAttnDownBlock2D` and `CrossAttnUpBlock2D` as the number of attention heads, instead of the dimension of each attention head ```python from diffusers...

stale
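To make the reported mismatch concrete, here is a hedged scalar sketch of the two possible readings of the config value. The channel count is illustrative (320 is the width of Stable Diffusion's first down block); `split_heads` is a hypothetical helper, not a diffusers API:

```python
def split_heads(channels: int, attention_head_dim: int):
    """Contrast the two readings of `attention_head_dim` for a block of `channels` width.

    Returns ((heads, dim_per_head)) for each interpretation:
    first as a head *count* (the reported behavior), then as a per-head *dimension*
    (what the parameter name suggests).
    """
    as_count = (attention_head_dim, channels // attention_head_dim)
    as_dim = (channels // attention_head_dim, attention_head_dim)
    return as_count, as_dim

print(split_heads(320, 8))  # -> ((8, 40), (40, 8))
```

Both readings multiply back to the same channel count, so shapes still line up and nothing crashes; the difference only shows in how finely the attention is split, which is why the naming mismatch is easy to miss.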

I'm running the [Imagic Stable Diffusion community pipeline](https://github.com/huggingface/diffusers/blob/main/examples/community/imagic_stable_diffusion.py) and it's routinely allocating 25-38 GiB of GPU VRAM, which seems excessively high. @MarkRich any ideas on how to reduce memory usage? Xformers...

question
stale

**Is your feature request related to a problem? Please describe.** Currently the UNet can be exported using `torch.jit.trace`, but the result is device-dependent. **Describe the solution you'd like** It would be much...

stale