
ControlNet: return zero tensors if scale is zero

Open cross-attention opened this issue 1 year ago • 7 comments

What does this PR do?

Return zero tensors when the scale for ControlNet is set to zero.

Example use case: Consider a pipeline that incorporates multiple ControlNets (e.g., 2) and aims to generate images using the same pipeline but with variations: a) using only the 1st ControlNet; b) using only the 2nd ControlNet; c) using both ControlNets. In such scenarios, you can configure the scales as [..., 0] for option a) and [0, ...] for option b). This pull request addresses the issue of unnecessary inference of ControlNet model blocks when the scale is set to zero, optimizing performance for these cases.
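The change described above amounts to an early return: when the conditioning scale is zero, the scaled residuals would be all zeros anyway, so running the ControlNet blocks is wasted work. A minimal sketch of that control flow (illustrative only, not the actual diffusers implementation; tensors are stood in for with plain Python floats, and all names here are hypothetical):

```python
# Hypothetical sketch of the early-return optimization: when the
# conditioning scale is 0, skip the expensive ControlNet forward pass
# and return zero residuals of the expected shapes directly.

def controlnet_forward(sample, conditioning_scale, num_down_blocks=12):
    if conditioning_scale == 0:
        # No inference needed: every residual would be scaled to zero anyway.
        down_block_res_samples = [0.0] * num_down_blocks
        mid_block_res_sample = 0.0
        return down_block_res_samples, mid_block_res_sample
    # ...otherwise run the (expensive) ControlNet blocks; here the scaling
    # of the residuals stands in for the real computation.
    down_block_res_samples = [sample * conditioning_scale] * num_down_blocks
    mid_block_res_sample = sample * conditioning_scale
    return down_block_res_samples, mid_block_res_sample
```

With this shape, the multi-ControlNet cases above come for free: passing a scale of 0 for one ControlNet turns its contribution into cheap zero tensors while the other ControlNets run normally.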

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline?
  • [x] Did you read our philosophy doc (important for complex PRs)?
  • [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [ ] Did you write any new necessary tests?

Who can review?

@sayakpaul

cross-attention avatar Apr 05 '24 10:04 cross-attention

Thanks for your PR. Could we also have some code to establish the optimization obtained from your changes?

sayakpaul avatar Apr 05 '24 11:04 sayakpaul

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

import torch
from diffusers import ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose",
    torch_dtype=torch.float16,
).to("cuda:0")

Single ControlNet inference:

%%time
down_block_res_samples, mid_block_res_sample = controlnet(
    torch.randn((1, 4, 64, 64), device="cuda:0", dtype=torch.float16),
    controlnet_cond=torch.randn((1, 3, 512, 512), device="cuda:0", dtype=torch.float16),
    timestep=torch.tensor(0),
    encoder_hidden_states=torch.randn((1, 77, 768), device="cuda:0", dtype=torch.float16),
    conditioning_scale=0,
    return_dict=False,
)
CPU times: user 4.61 ms, sys: 4.18 ms, total: 8.79 ms
Wall time: 8.18 ms
%%time
down_block_res_samples, mid_block_res_sample = controlnet(
    torch.randn((1, 4, 64, 64), device="cuda:0", dtype=torch.float16),
    controlnet_cond=torch.randn((1, 3, 512, 512), device="cuda:0", dtype=torch.float16),
    timestep=torch.tensor(0),
    encoder_hidden_states=torch.randn((1, 77, 768), device="cuda:0", dtype=torch.float16),
    conditioning_scale=1,
    return_dict=False,
)
CPU times: user 320 ms, sys: 56.1 ms, total: 376 ms
Wall time: 375 ms

UPD: correct timeit results below

cross-attention avatar Apr 05 '24 12:04 cross-attention

That's nice. And without your changes the timing remains 375 ms-ish?

sayakpaul avatar Apr 05 '24 12:04 sayakpaul

I would be supportive of this feature.

sayakpaul avatar Apr 05 '24 12:04 sayakpaul

timeit gives more accurate results:
zero scale: 191 µs ± 5.62 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
non-zero scale: 15.9 ms ± 268 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

GPU: A10G
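For context, %%time measures a single run (so it is dominated by launch overhead for very fast calls), while %%timeit repeats the call many times and reports mean ± std. dev., which is why the numbers above are the more trustworthy ones. Outside a notebook, the same kind of measurement could be taken with the stdlib timeit module, roughly like this (a toy stand-in for the scale-0 early return, not the real controlnet call):

```python
import timeit

def cheap_early_return():
    # Stand-in for the conditioning_scale=0 path: just build zero residuals.
    return [0.0] * 12, 0.0

# Average seconds per call over many repetitions, analogous to %%timeit.
n = 10_000
per_call = timeit.timeit(cheap_early_return, number=n) / n
print(f"{per_call * 1e6:.1f} µs per loop")
```

The key point is the same as in the notebook numbers: the zero-scale path is microseconds of tensor allocation rather than milliseconds of model inference.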

cross-attention avatar Apr 05 '24 12:04 cross-attention

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar May 05 '24 15:05 github-actions[bot]
