Attention masks are missing in SD3, so text padding tokens are not masked out
Describe the bug
In the attention implementation of SD3, attention masks are currently not used. As a result, outputs are inconsistent across different values of max_sequence_length whenever the text tokens contain padding, because the attention scores of the padding tokens are non-zero. This problem was discussed in https://github.com/huggingface/diffusers/discussions/8628, and this issue was created to track progress on fixing it.
Thanks @sayakpaul for the discussion.
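For context, here is a minimal sketch of the masking behavior being requested, using PyTorch's `F.scaled_dot_product_attention` with a boolean mask over the joint image+text token sequence. The names (`masked_joint_attention`, `text_pad_mask`, `num_image_tokens`) are illustrative, not the actual diffusers internals:

```python
import torch
import torch.nn.functional as F

def masked_joint_attention(q, k, v, text_pad_mask, num_image_tokens):
    # q, k, v: (batch, heads, seq, head_dim), where seq covers the
    # concatenated image tokens followed by text tokens, as in SD3.
    # text_pad_mask: (batch, text_seq) boolean, True for real text tokens.
    batch = text_pad_mask.shape[0]
    # Image tokens are never padding, so they are always kept.
    image_mask = torch.ones(
        batch, num_image_tokens, dtype=torch.bool, device=text_pad_mask.device
    )
    keep = torch.cat([image_mask, text_pad_mask], dim=1)  # (batch, seq)
    # Broadcast to (batch, 1, 1, seq): a False key position is excluded
    # from attention, so padding tokens receive zero attention weight.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=keep[:, None, None, :])
```

With such a mask in place, the output for a given prompt would no longer depend on how many padding tokens max_sequence_length introduces.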
Reproduction
n/a
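A minimal sketch that might surface the inconsistency (not part of the original report; the model ID and parameter values below are only illustrative). Because padding is not masked, the two images differ even though the prompt and seed are identical:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a cat"  # short prompt -> mostly padding tokens

# Same prompt and seed, different max_sequence_length: if padding tokens
# were masked out, the outputs would match; in practice they differ.
image_a = pipe(
    prompt, max_sequence_length=77, generator=torch.Generator("cuda").manual_seed(0)
).images[0]
image_b = pipe(
    prompt, max_sequence_length=256, generator=torch.Generator("cuda").manual_seed(0)
).images[0]
```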
Logs
No response
System Info
n/a
Who can help?
No response
Hi @sayakpaul, I am interested in working on this issue.
Thanks for your interest! Sure, let’s go.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Hi guys, if no one is working on this, I'm willing to pick it up 👍 @sayakpaul
Gentle ping to keep the activity going. @rootonchair Would you be able to contribute the fix?
Yes @a-r-r-o-w, I will open a PR soon.