
πŸ€— Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

Results: 1293 diffusers issues

This PR shows how we can integrate Heun's scheduler into our current framework **without** any changes to the pipelines. We simply stretch the timesteps and sigmas => I honestly don't...
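A minimal sketch of the stretching idea, assuming a plain Python list of sigmas (the values here are illustrative, not the actual schedule): repeating each interior sigma lets the unchanged pipeline loop call the scheduler twice per step, once for the first-order (Euler) stage and once for the Heun correction.

```python
# Illustrative sigma schedule from high to low noise.
sigmas = [14.6, 9.1, 5.4, 2.9, 0.0]

# "Stretch" by duplicating every interior sigma, so the pipeline's
# existing denoising loop naturally performs two sub-steps per step.
stretched = [sigmas[0]] + [s for s in sigmas[1:-1] for _ in range(2)] + [sigmas[-1]]
# stretched has 2 * len(sigmas) - 2 entries; consecutive duplicates mark
# the Euler sub-step and the Heun correction at the same sigma.
```

The scheduler can then alternate its internal state on each call without the pipeline knowing anything about second-order sampling.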

### Model/Pipeline/Scheduler description It would be really nice to have a pipeline that integrates well with k-diffusion so that all schedulers can be used out of the box with all checkpoints...

stale

* support for predict_epsilon = False in DDIM sampler * changed timestep selection such that it is more uniform if number of sampling steps doesn't cleanly divide number of training...
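To illustrate the uniformity issue with small numbers (illustrative, not the PR's actual code): with a stride of `num_train // num_steps`, the last selected timestep falls well short of the end of the training range, while linspace-style rounding spreads the steps over the whole range.

```python
# Pick 7 sampling timesteps out of 1000 training timesteps.
num_train = 1000
num_steps = 7

# Stride-based selection: 1000 // 7 = 142, so the last step is 852 and
# the final ~15% of the training range is never sampled.
stride = num_train // num_steps
strided = [i * stride for i in range(num_steps)]

# Linspace-style selection: endpoints included, spacing nearly uniform.
uniform = [round(i * (num_train - 1) / (num_steps - 1)) for i in range(num_steps)]
```

The gap only matters when `num_steps` does not cleanly divide `num_train`; when it does, both schemes agree up to the endpoint.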

**What API design would you like to have changed or added to the library? Why?** I see some of the [schedulers](https://github.com/huggingface/diffusers/blob/7bd50cabafc60bf45ebbe1957b125d3f4c758ba8/src/diffusers/schedulers/scheduling_lms_discrete.py#L34) had these parameters added to init ``` trained_betas: Optional[np.ndarray]...
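For context, `trained_betas` lets a caller bypass the scheduler's built-in schedule and supply the exact betas a model was trained with. A hedged sketch of building such a schedule in plain Python (0.0001 and 0.02 are the common linear-schedule defaults; the actual parameter expects an `np.ndarray`):

```python
# Build a linear beta schedule to pass as `trained_betas`, instead of
# relying on the scheduler's `beta_start`/`beta_end` defaults.
num_train_timesteps = 1000
beta_start, beta_end = 0.0001, 0.02

betas = [
    beta_start + (beta_end - beta_start) * i / (num_train_timesteps - 1)
    for i in range(num_train_timesteps)
]
# betas[0] == beta_start, betas[-1] == beta_end, linearly interpolated.
```

A model trained with a non-standard schedule would pass its own array here, which is why exposing the parameter uniformly across schedulers is useful.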

stale

### Describe the bug Basically, if you set the scheduler to EulerAncestralDiscreteScheduler and the custom pipeline to lpw_stable_diffusion, you will get different images when you generate ### Reproduction Here is...

bug

Follow-up of https://github.com/huggingface/diffusers/pull/1357, and mimics Transformers https://github.com/huggingface/transformers/pull/20321/files#diff-82b93b530be62e40679876a764438660dedcd9cc9e33c2374ed21b14ebef5dba

Fixes #1056. Another option is to unconditionally use `torch.float32` in all platforms (both `int` and `float` are accepted as inputs), what do you think?

I'm subclassing StableDiffusionPipeline (because it seems like that's the intended way to make a DiffusionPipeline that is still able to take advantage of StableDiffusionPipeline's methods to enable attention slicing, decode...

- move the enable/disable call to being part of the base DiffusionPipeline (removes a bunch of duplicates) - make the call recursive across all the modules in the model graph,...
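A hedged sketch of the recursive idea, with a toy `Module` class standing in for `torch.nn.Module` (the class and flag names are illustrative): the enable/disable call toggles this module, then walks every child in the graph rather than only the top-level components.

```python
# Toy module graph; in the real library, torch.nn.Module provides children().
class Module:
    def __init__(self, *children):
        self._children = list(children)
        self.sliced = False

    def children(self):
        return self._children


def set_attention_slicing(module, enabled):
    """Toggle the flag on this module, then recurse into every child."""
    module.sliced = enabled
    for child in module.children():
        set_attention_slicing(child, enabled)


# One recursive call covers nested submodules a flat loop would miss.
pipeline = Module(Module(Module()), Module())
set_attention_slicing(pipeline, True)
```

Making the traversal recursive in the base `DiffusionPipeline` is what removes the per-pipeline duplicates: each subclass no longer needs to enumerate its own modules.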

20 GB -> 16 GB RAM usage for some workloads, same speed (you don't have to materialize intermediates with torch.cdist)
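Back-of-the-envelope arithmetic for the saving (the sizes below are illustrative): computing pairwise distances by broadcasting materializes the full `N x M x D` difference tensor before reducing, while `torch.cdist` only ever allocates the `N x M` result.

```python
# Pairwise distances between N and M points in D dimensions, float32.
N, M, D = 4096, 4096, 512
bytes_per_float = 4

# (a[:, None] - b[None]).pow(2).sum(-1) materializes an N x M x D tensor.
broadcast_bytes = N * M * D * bytes_per_float  # ~32 GiB for these sizes
# torch.cdist(a, b) streams the reduction; only the N x M output exists.
cdist_bytes = N * M * bytes_per_float          # ~64 MiB for these sizes
```

The peak-memory ratio is a factor of `D`, which is why the speed is unchanged while the footprint drops.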