Fix image upcasting
Thanks for the opportunity to fix #7854
What does this PR do?
This PR proposes to fix image upcasting before `vae.encode()` when using fp16 and `vae.config.force_upcast == True` with xformers or torch>=2.0 installed. Casting with `.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)` is intended for use before `vae.decode()`, not `vae.encode()`.
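The casting rule can be sketched as follows. This is a minimal illustration with a stand-in `DummyVAE` and string dtypes, not the actual diffusers code; in the real pipeline the dtypes are `torch.dtype` objects and `force_upcast` comes from `vae.config`:

```python
# Minimal sketch of the dtype-casting rule this PR applies.
# DummyVAE and the string dtypes are illustrative stand-ins.

class DummyVAE:
    def __init__(self, param_dtype, force_upcast):
        self.param_dtype = param_dtype      # dtype of the VAE parameters
        self.force_upcast = force_upcast    # mirrors vae.config.force_upcast

def dtype_before_encode(vae, image_dtype):
    # Before vae.encode(): upcast the *input image* (and the VAE) to float32
    # when force_upcast is set, so fp16 activations don't overflow.
    return "float32" if vae.force_upcast else image_dtype

def dtype_before_decode(vae, latents_dtype):
    # Before vae.decode(): cast the *latents* to the VAE parameter dtype
    # (the next(iter(self.vae.post_quant_conv.parameters())).dtype pattern).
    return vae.param_dtype

vae = DummyVAE(param_dtype="float32", force_upcast=True)
print(dtype_before_encode(vae, "float16"))  # float32
print(dtype_before_decode(vae, "float16"))  # float32
```

The point of the PR is that only the second rule was being applied before `vae.encode()`, where the first rule is the appropriate one.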
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the contributor guideline?
- [X] Did you read our philosophy doc (important for complex PRs)?
- [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @yiyixuxu @kadirnar
The same issue exists in the StableDiffusionPipeline :) would you like to tackle that one too? Latents must be cast to the VAE dtype before decode.
I need to understand. Wouldn't this throw an error:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16"
).to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```

Or is it a case that only occurs on MPS?
It's actually something that occurs on ROCm, which masquerades as CUDA.
Thanks for merging!
Thanks @standardAI ❤️