diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
Based on #1161. I've mostly taken and adapted Flax->PT code from [transformers](https://github.com/huggingface/transformers). My test scenario is currently this:

```python
from diffusers import FlaxStableDiffusionPipeline, StableDiffusionPipeline
from diffusers import AutoencoderKL, UNet2DConditionModel
import...
```
We are thinking about how to best support methods that tweak the cross-attention computation, such as hypernetworks (where linear layers that map k -> k' and v -> v' are...
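As a rough sketch of the idea above (NumPy used as a stand-in for PyTorch; the layer names and shapes are hypothetical, not diffusers API), a hypernetwork inserts extra linear maps on the keys and values before the usual attention computation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hypernetwork_cross_attention(q, k, v, w_k, w_v):
    """Cross-attention where hypothetical hypernetwork linear layers
    remap the keys (k -> k') and values (v -> v') before attention."""
    k = k @ w_k  # k -> k'
    v = v @ w_v  # v -> v'
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# toy shapes: batch=2, query length=4, context length=7, dim=8
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4, 8))
k = rng.normal(size=(2, 7, 8))
v = rng.normal(size=(2, 7, 8))
w_k = rng.normal(size=(8, 8))  # hypernetwork key projection
w_v = rng.normal(size=(8, 8))  # hypernetwork value projection

out = hypernetwork_cross_attention(q, k, v, w_k, w_v)
print(out.shape)  # (2, 4, 8)
```

Supporting this cleanly likely means exposing a hook at the point where k and v are computed, so such extra projections can be injected without forking the attention module.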
## Intro

Community Pipelines were introduced in `diffusers==0.4.0` with the idea of allowing the community to quickly add, integrate, and share their custom pipelines on top of `diffusers`. You can...
Some recent changes in `transformers` (https://github.com/huggingface/transformers/pull/20602) and `accelerate` (https://github.com/huggingface/accelerate/pull/920) force us to align the behavior in `diffusers` as well. For more information, also have a look at: https://discuss.pytorch.org/t/discrepancy-between-loading-models-with-meta-tensors-and-normal-load-from-state-dict/168295
to `get_scheduler` func, * add it to `train_dreambooth.py`
@patrickvonplaten @patil-suraj @anton-l I wonder what your thoughts are on the arguments the artists are making on ArtStation, Twitter, and Instagram right now. The main point is that the artists...

Hey there, first thanks for the amazing work :) I managed to get Dreambooth running on my trusty 2080 Ti, and the training works; I have results in the 400 and...
### Describe the bug

Hello, I was testing [the conversion script for ckpts to diffusers](https://raw.githubusercontent.com/huggingface/diffusers/039958eae55ff0700cfb42a7e72739575ab341f1/scripts/convert_original_stable_diffusion_to_diffusers.py), when I realized all the images generated by the converted models are only 256x256...
According to the xformers documentation (https://facebookresearch.github.io/xformers/components/ops.html#xformers.ops.memory_efficient_attention):

> Input tensors must be in format [B, M, H, K], where B is the batch size, M the sequence length, H the...
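To make the layout requirement above concrete, here is a minimal sketch (NumPy transposes standing in for `torch.permute`; the `[B, H, M, K]` starting layout is an assumption, as many attention implementations keep heads before the sequence axis):

```python
import numpy as np

# Assumed common layout: [B, H, M, K] = (batch, heads, sequence length, head dim).
# xformers' memory_efficient_attention expects [B, M, H, K], so the
# head and sequence axes have to be swapped before the call.
B, H, M, K = 2, 8, 16, 64
x = np.zeros((B, H, M, K))

x_xformers = x.transpose(0, 2, 1, 3)  # [B, H, M, K] -> [B, M, H, K]
print(x_xformers.shape)  # (2, 16, 8, 64)
```

The same transpose (applied in reverse) is needed on the output to get back to the original layout.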