TransformerEngine
Enable AttnFuncWithCPAndKVP2P to support MLA
Description
The function AttnFuncWithCPAndKVP2P does not support MLA (Multi-head Latent Attention), because it concatenates K and V into a single tensor for communication; in MLA, K and V have different head dimensions, which prevents this concatenation. This PR pads V to align its head dimension with that of K, thereby enabling MLA support with minimal modifications.
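The padding trick can be sketched as follows. This is an illustrative, framework-agnostic mock-up in NumPy (the actual change lives in TransformerEngine's PyTorch code, and the function names `pack_kv_with_padding`/`unpack_kv` and the example head dimensions are hypothetical, not names from this PR): V is zero-padded along its last dimension to match K, the pair is stacked into one buffer suitable for P2P send/recv, and the padding is sliced off after communication.

```python
import numpy as np

def pack_kv_with_padding(k, v):
    """Pad V's head dim up to K's so K and V fit in one contiguous
    tensor for point-to-point communication (hypothetical helper)."""
    # k: [seq, heads, head_dim_k], v: [seq, heads, head_dim_v]
    # In MLA head_dim_v < head_dim_k, so zero-pad V's last dimension.
    pad = k.shape[-1] - v.shape[-1]
    v_padded = np.pad(v, [(0, 0)] * (v.ndim - 1) + [(0, pad)])
    return np.stack([k, v_padded])  # shape [2, seq, heads, head_dim_k]

def unpack_kv(kv, head_dim_v):
    """Split the received buffer back into K and V, stripping the pad."""
    k, v_padded = kv[0], kv[1]
    return k, v_padded[..., :head_dim_v]

# Example with MLA-like shapes where the K and V head dims differ
# (192 and 128 are placeholder values, not taken from the PR).
k = np.random.rand(8, 4, 192)
v = np.random.rand(8, 4, 128)
kv = pack_kv_with_padding(k, v)
k_out, v_out = unpack_kv(kv, v.shape[-1])
assert np.array_equal(k, k_out) and np.array_equal(v, v_out)
```

The round trip is lossless: the zero padding only costs extra communication bandwidth proportional to `head_dim_k - head_dim_v`, which is the price paid here for keeping the single-tensor P2P path unchanged.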
issue
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or a new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactoring
Changes
Please list the changes introduced in this PR:
- Pad V to match K's head dimension in `AttnFuncWithCPAndKVP2P` so that K and V can be concatenated into a single tensor for P2P communication, enabling MLA.
Checklist:
- [x] I have read and followed the contributing guidelines
- [ ] The functionality is complete
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes