InvokeAI
Add a workaround for broken sliced attention on MPS with torch 2.4.1
## Summary
This PR adds a workaround for broken sliced attention on MPS with torch 2.4.1. The workaround keeps generation working on MPS at the cost of increased peak memory utilization. Users who are unhappy with the higher memory usage can manually downgrade to torch==2.2.2.
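The gist of such a workaround can be sketched as a small gate that skips the sliced-attention path on affected configurations. This is a minimal illustration, not the PR's actual code: the function name, parameters, and version check below are hypothetical, and the real change may detect the issue differently.

```python
def use_sliced_attention(device_type: str, torch_version: str) -> bool:
    """Hypothetical helper: decide whether sliced attention is safe.

    Sliced attention is assumed broken on MPS with torch 2.4.x, so we
    fall back to the unsliced path there (trading higher peak memory
    for correct output). All other device/version combinations keep
    the memory-efficient sliced path.
    """
    broken_mps = device_type == "mps" and torch_version.startswith("2.4")
    return not broken_mps


# Example: sliced attention is disabled only on the affected combo.
print(use_sliced_attention("mps", "2.4.1"))   # False
print(use_sliced_attention("mps", "2.2.2"))   # True
print(use_sliced_attention("cuda", "2.4.1"))  # True
```

Users who prefer the lower-memory behavior can pin the older release with `pip install torch==2.2.2`.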
## Related Issues / Discussions
Bug report: https://github.com/invoke-ai/InvokeAI/issues/7049
## QA Instructions
- [x] Test text-to-image on MPS
## Checklist
- [x] The PR has a short but descriptive title, suitable for a changelog
- [x] Tests added / updated (if applicable)
- [x] Documentation added / updated (if applicable)