
Reducing inference timings for the SD2.1 base model

Open pratos opened this issue 3 years ago • 1 comment

I managed to shave a few seconds off inference times for SD2.1 at both 512x512 (50 steps) and 768x768 (50 steps).

Using just a few additions:

import torch
from diffusers import StableDiffusionPipeline

# Let cuDNN auto-tune convolution algorithms for the fixed input sizes
torch.backends.cudnn.benchmark = True
# Allow TF32 matmuls on Ampere+ GPUs for faster matrix math
torch.backends.cuda.matmul.allow_tf32 = True

pipe = StableDiffusionPipeline.from_pretrained(
    MODEL_ID,
    cache_dir=MODEL_CACHE,
    local_files_only=True,
)
pipe = pipe.to("cuda")

# Memory-efficient attention via xformers, plus sliced VAE decoding
pipe.enable_xformers_memory_efficient_attention()
pipe.enable_vae_slicing()
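
Something like the following can be used to time the runs; a minimal sketch, and the prompt here is just a placeholder:

import time

import torch

prompt = "a photo of an astronaut riding a horse"  # placeholder prompt

for size in (512, 768):
    torch.cuda.synchronize()  # make sure prior GPU work is done before timing
    start = time.perf_counter()
    pipe(prompt, height=size, width=size, num_inference_steps=50)
    torch.cuda.synchronize()  # wait for generation to actually finish
    print(f"{size}x{size}, 50 steps: {time.perf_counter() - start:.2f}s")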

Overall output quality didn't suffer because of this; the images are still crisp. I wanted to know how I should create a PR to add these. Also, are there any tests around this?
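
For example, a minimal smoke test could look like this (the HF model id and the tiny 64x64 / 2-step settings are assumptions to keep the test fast, not the benchmark settings above):

import torch
from diffusers import StableDiffusionPipeline

# Assumed model id; the repo normally loads MODEL_ID from a local cache
MODEL_ID = "stabilityai/stable-diffusion-2-1-base"

def test_optimized_pipeline_smoke():
    torch.backends.cudnn.benchmark = True
    torch.backends.cuda.matmul.allow_tf32 = True
    pipe = StableDiffusionPipeline.from_pretrained(MODEL_ID).to("cuda")
    pipe.enable_xformers_memory_efficient_attention()
    pipe.enable_vae_slicing()
    # Tiny image and step count: only checks the pipeline runs end to end
    image = pipe("a cat", height=64, width=64, num_inference_steps=2).images[0]
    assert image.size == (64, 64)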

Here are the inference timings:

[screenshots of the timing output were attached here]

pratos · Dec 22 '22 23:12

Model in question: https://replicate.com/pratos/stable-diffusion-2-1-512

pratos · Dec 22 '22 23:12