Compute embedding distances with torch.cdist
20 GB -> 16 GB RAM use for some workloads, same speed (you don't have to materialize intermediates with torch.cdist)
cc @patrickvonplaten, not what we discussed but this is an effective three-liner
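For context, a minimal sketch of the trick (illustrative shapes and names, not the actual model dimensions): the naive pairwise-distance expression broadcasts to a large 3-D intermediate, while `torch.cdist` produces the same distance matrix directly.

```python
import torch

# Illustrative shapes only: 4096 flattened latents against a
# 512-entry codebook of 64-dim embeddings.
z_flat = torch.randn(4096, 64)
codebook = torch.randn(512, 64)

# Naive pairwise squared distances: broadcasting materializes a
# (4096, 512, 64) intermediate tensor before the sum.
naive = ((z_flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)

# torch.cdist returns the same (4096, 512) distance matrix without
# ever building that intermediate.
dists = torch.cdist(z_flat, codebook) ** 2

nearest = dists.argmin(dim=1)  # closest codebook index per latent
```

The memory saving grows with the number of latents times codebook size times embedding dim, which is where the 20 GB -> 16 GB figure above comes from.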
LGTM! thanks!
Hey @blefaudeux,
How do you use this feature? I think it's only used in decoding if `force_not_quantize` is set to `True`, no?
It's in the superres path; not doing that just eats 4 GB of RAM when decoding, for nothing. It's very much not perfect though, and I'm looking at better options, but it's better than not doing it :)
improves on https://github.com/huggingface/diffusers/issues/1434
cc @patil-suraj, if you're interested in high res superres
Thanks a lot for the PR, this looks good to me! Will run the slow tests and then merge.
Also for high resolution upscaling, I'm exploring another option in #1521, and it seems to work well.
Thanks for the link! For this PR I think it's always worth it because there's no tradeoff: it's just better than the previous three lines. But it's not enough to enable high res, that's for sure! No issues with borders when splitting the decode?
Another option, if the convs were depthwise, would have been to compute them depth-first (à la Reformer years ago), but that's probably not a reasonable option, so I guess that splitting is as good as it gets?