
Fix Attention Mask Padding to Ensure Multiple of 8 Alignment

Open SahilCarterr opened this issue 1 year ago • 2 comments

What does this PR do?

Fixes #9637: resolves the attention mask padding issue for compatibility with xFormers.

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline?
  • [x] Did you read our philosophy doc (important for complex PRs)?
  • [x] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [x] Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. @sayakpaul @yiyixuxu


Code from Issue

from diffusers.models.attention_processor import Attention, XFormersAttnProcessor
import torch

# Initialize the attention processor
attn_processor = XFormersAttnProcessor()

# Create the Attention module
attn = Attention(
    query_dim=256,
    heads=8,
    dim_head=64,
    processor=attn_processor,
).to(device="cuda", dtype=torch.bfloat16)

# Create dummy inputs
q = torch.zeros((2, 350, 256), device="cuda", dtype=torch.bfloat16)
kv = torch.zeros((2, 700, 256), device="cuda", dtype=torch.bfloat16)
attn_mask = torch.zeros((2, 1, 700), device="cuda", dtype=torch.bfloat16)

# Perform the attention operation
out = attn(q, kv, attn_mask)

# Print the output shape
print(out.shape)

Output

torch.Size([2, 350, 256])
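The underlying problem is that xFormers expects the attention mask's key-length dimension to sit in 8-aligned storage; a mask with a last dimension like 700 can trigger alignment errors. A minimal sketch of the padding approach this PR targets (the helper name `align_attention_mask` is an assumption, not the actual diffusers API): pad the last dimension up to the next multiple of 8, then slice back to the logical length so the values are unchanged but the strides stay aligned.

```python
import torch
import torch.nn.functional as F


def align_attention_mask(attention_mask: torch.Tensor, alignment: int = 8) -> torch.Tensor:
    """Re-allocate the mask so its key-length dimension lives in
    `alignment`-sized rows, then slice back to the logical length."""
    key_len = attention_mask.shape[-1]
    remainder = key_len % alignment
    if remainder == 0:
        # already aligned, nothing to do
        return attention_mask
    pad = alignment - remainder
    # F.pad with value=0.0 appends zeros along the last dimension,
    # which is a no-op for an additive attention mask
    padded = F.pad(attention_mask, (0, pad), value=0.0)
    # slicing keeps the padded storage, so the strides remain 8-aligned
    return padded[..., :key_len]


mask = torch.zeros((2, 1, 700))
aligned = align_attention_mask(mask)
print(aligned.shape)   # logical shape is preserved: (2, 1, 700)
print(aligned.stride(0))  # row stride is now a multiple of 8
```

Slicing rather than keeping the padded length means downstream shape checks against the key length still pass, while the memory layout satisfies xFormers.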

Hardware Information

  • GPU: NVIDIA A100
  • Environment: Google Colab

SahilCarterr · Oct 15 '24 08:10

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Can you help fix this error when I run the test script? `RuntimeError: expand(CUDABFloat16Type{[16, 1, 1, 278]}, size=[16, 1, 278]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)` @sayakpaul. I have updated the test.
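The traceback indicates a 4D mask of shape `[16, 1, 1, 278]` being expanded to a 3D target size `[16, 1, 278]`, which `Tensor.expand` rejects because it cannot reduce rank. A hedged sketch of one way to normalize the mask rank before expanding (the helper name and call site are assumptions, not the actual fix merged in diffusers):

```python
import torch


def normalize_mask_rank(
    attention_mask: torch.Tensor, batch_size: int, query_tokens: int
) -> torch.Tensor:
    """Collapse a [B, 1, 1, K] mask down to [B, 1, K], then broadcast
    the singleton query dimension to the real query length."""
    if attention_mask.ndim == 4 and attention_mask.shape[1] == 1:
        # drop the extra broadcast dimension so expand sees a 3D tensor
        attention_mask = attention_mask.squeeze(1)
    return attention_mask.expand(batch_size, query_tokens, attention_mask.shape[-1])


# Reproduces the shape from the traceback: 4D in, 3D out
mask = torch.zeros((16, 1, 1, 278), dtype=torch.bfloat16)
out = normalize_mask_rank(mask, batch_size=16, query_tokens=5)
print(out.shape)  # torch.Size([16, 5, 278])
```

`expand` can only add or broadcast dimensions, never remove them, so the squeeze must happen before the expand call.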

SahilCarterr · Oct 17 '24 10:10