Hierarchical CP implementation (Ulysses + Ring)
Description
This PR adds a hierarchical implementation of context parallelism (CP) for attention. It combines Ulysses-style A2A communication within low-level CP groups (e.g., over NVLink) with ring-style P2P communication across high-level CP groups (e.g., over IB links). For more details, please refer to LongVILA and USP.
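As a rough illustration of the two-level layout, the sketch below builds the low-level and high-level CP groups with `torch.distributed`. The group sizes, the consecutive-ranks-per-node layout, and the variable names (`cp_group_a2a`, `cp_group_p2p`) are illustrative assumptions, not the PR's actual setup code.

```python
# Illustrative two-level CP group setup (NOT the PR's code): assumes 2 nodes x
# 8 GPUs launched with torchrun, with CP spanning all 16 ranks.
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank % torch.cuda.device_count())

a2a_size = 8                   # low-level CP size: A2A within a node (NVLink)
p2p_size = world // a2a_size   # high-level CP size: P2P across nodes (IB)

# Low-level groups: consecutive ranks on the same node, used for
# Ulysses-style all-to-all over the head/sequence dimensions.
cp_group_a2a = None
for i in range(p2p_size):
    ranks = list(range(i * a2a_size, (i + 1) * a2a_size))
    group = dist.new_group(ranks)  # every rank must enter every new_group call
    if rank in ranks:
        cp_group_a2a = group

# High-level groups: the same local rank on each node, used for
# ring-style P2P exchange of KV chunks.
cp_group_p2p = None
for i in range(a2a_size):
    ranks = list(range(i, world, a2a_size))
    group = dist.new_group(ranks)
    if rank in ranks:
        cp_group_p2p = group
```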
This implementation supports:
- backends: `FusedAttention`, `FlashAttention`
- dtypes: BF16/FP16 (both backends), FP8 (`FusedAttention` only)
- mask types: `causal`, `no_mask`
- attention types: MHA, MQA/GQA
- qkv_format: `sbhd`, `bshd`
- bias type: `no_bias`
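Below is a hedged sketch of wiring the two CP groups into Transformer Engine attention, continuing from the group setup above. `DotProductAttention` and `set_context_parallel_group` exist in `transformer_engine.pytorch`; passing a `[a2a_group, p2p_group]` list together with `cp_comm_type="a2a+p2p"` is assumed here to be this PR's convention, so the exact argument shapes may differ from the merged code.

```python
# Hypothetical hookup (argument shapes assumed, see lead-in); reuses
# cp_group_a2a / cp_group_p2p from the setup sketch above.
import torch
import torch.distributed as dist
import transformer_engine.pytorch as te

attn = te.DotProductAttention(
    num_attention_heads=16,
    kv_channels=64,
    attn_mask_type="causal",  # PR supports causal and no_mask
    qkv_format="sbhd",        # PR supports sbhd and bshd
)

cp_stream = torch.cuda.Stream()                       # side stream for CP comm
cp_global_ranks = list(range(dist.get_world_size()))  # CP spans all ranks here

attn.set_context_parallel_group(
    [cp_group_a2a, cp_group_p2p],  # low-level A2A group, high-level P2P group
    cp_global_ranks,
    cp_stream,
    cp_comm_type="a2a+p2p",        # hierarchical CP: Ulysses + Ring
)
```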
Type of change
- [ ] Documentation change (change only to the documentation, either a fix or new content)
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Infra/Build change
- [ ] Code refactor
Checklist:
- [x] I have read and followed the contributing guidelines
- [ ] The functionality is complete
- [x] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [x] My changes generate no new warnings
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
/te-ci pytorch
@xrennvidia thanks for the PR! I left a few comments and also edited the PR description a bit. Let me know if it's accurate. Thanks!
It's accurate, thanks.
/te-ci pytorch