Context parallelism with MLA
I have a question regarding FusedAttention: why doesn't it support context parallelism with MLA (Multi-head Layer Attention)? What are the technical limitations preventing this compatibility?
Hi @SuperCB
You mean Multi-head Latent Attention, which is used by DeepSeek? Technically, nothing should stop us from doing it; we just have not done it yet. Considering the popularity of MLA/DeepSeek, we should definitely add this support. We will do it. Thanks for bringing this to our attention.
I am working on it too. I found that the function AttnFuncWithCPAndQKVOA2A appears to support context parallelism for MLA. Is my conclusion correct, and what are the main reasons currently preventing MLA from supporting context parallelism?
Yeah, A2A implementation probably can work with MLA out of the box. AttnFuncWithCPAndKVAllGather might work for MLA also.
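For illustration, here is a minimal Ulysses-style sketch of the sequence/head all-to-all that A2A-based CP relies on (not TE's actual AttnFuncWithCPAndQKVOA2A code; the helper name `a2a_seq_to_head` and the `[b, s, h, d]` layout are assumptions). Since each of Q, K and V is exchanged as its own tensor, nothing forces K and V to share a head_dim:

```python
import torch
import torch.distributed as dist

def a2a_seq_to_head(x: torch.Tensor, cp_group) -> torch.Tensor:
    """All-to-all a [b, s_local, h, d] tensor: gather the full sequence,
    scatter heads across CP ranks -> [b, s_global, h_local, d]."""
    cp = dist.get_world_size(cp_group)
    b, s, h, d = x.shape
    # Split heads into cp groups and move the CP dimension to the front for the A2A.
    x = x.reshape(b, s, cp, h // cp, d).permute(2, 0, 1, 3, 4).contiguous()
    out = torch.empty_like(x)
    dist.all_to_all_single(out, x, group=cp_group)
    # out[j] holds rank j's sequence shard for this rank's head group.
    return out.permute(1, 0, 2, 3, 4).reshape(b, cp * s, h // cp, d)

# Q, K and V each go through the exchange independently, so differing
# head dims (e.g. 192 for Q/K with the RoPE part, 128 for V) are fine:
# q = a2a_seq_to_head(q, cp_group)
# k = a2a_seq_to_head(k, cp_group)
# v = a2a_seq_to_head(v, cp_group)
```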
P2P cannot work because it concatenates K and V into a single tensor for communication; the different head_dim of K and V in MLA prevents us from doing that concat, but this should be addressable.
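To make the constraint concrete, here is a small stand-in for the packed-K/V idea with made-up MLA-like shapes (192 vs. 128); this is only an illustration, not TE's actual P2P code:

```python
import torch

b, s, h = 2, 1024, 16
k = torch.randn(b, s, h, 192)   # MLA-style K head_dim, e.g. 128 (nope) + 64 (rope)
v = torch.randn(b, s, h, 128)   # MLA-style V head_dim

# The P2P ring exchange packs K and V into one buffer so a single
# send/recv moves both; with equal head dims this is just a stack.
try:
    kv = torch.stack([k, v])
except RuntimeError as err:
    print("cannot pack MLA K/V into one tensor:", err)
```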
As I said, technically there should be no reason preventing MLA+CP; at least I do not know of any right now. I might find something once I start working on this.
I think we can support MLA+CP in P2P by padding the V tensor, which keeps modifications to the original code minimal. I am currently trying this approach.
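A rough sketch of that padding idea, with hypothetical shapes (this is not the actual change that landed in the PR): pad V's head_dim up to K's so the packed buffer still works, then slice the padding off after communication.

```python
import torch
import torch.nn.functional as F

b, s, h = 2, 1024, 16
k = torch.randn(b, s, h, 192)
v = torch.randn(b, s, h, 128)

# Zero-pad V's head_dim up to K's so the existing packed K/V buffer
# (and the P2P send/recv around it) can stay unchanged.
pad = k.shape[-1] - v.shape[-1]
v_padded = F.pad(v, (0, pad))          # [b, s, h, 192]
kv = torch.stack([k, v_padded])        # shapes now match, packing works

# ... send/recv kv between CP ranks ...

# Strip the padding off again before the attention kernel sees V.
k_recv, v_recv = kv[0], kv[1][..., : v.shape[-1]]
assert torch.equal(v_recv, v)
```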
Closed by #1729.