Multi-Task-Transformer

Inquiry About Fusion Attention and Selective Attention Implementation in InvPT++

Open ierroric opened this issue 3 months ago • 0 comments

Hi there,

First off, I want to say your work on InvPT++ is really impressive; it's clear a lot of thought and effort went into it!

I've been exploring the project recently and am particularly interested in the fusion attention and selective attention modules mentioned in InvPT++. However, after going through the codebase linked on GitHub, I haven't been able to locate the implementation details for these two components.

If you have some spare time, would it be possible for you to share the relevant code (or pointers to where it might be) for these attention mechanisms? It would be incredibly helpful for my own learning and research.

Thanks so much for your time and for open-sourcing your work!

Best regards,
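For context on what I'm looking for: below is a minimal NumPy sketch of how I currently picture a cross-task "fusion attention" step, where the tokens of one task attend over a pool of tokens from all tasks. This is purely my own assumption for illustration, not the actual InvPT++ implementation; the function name `fusion_attention`, the token shapes, and the concatenated-pool design are all hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fusion_attention(query_feats, context_feats):
    """Hypothetical cross-task fusion (my guess, not the paper's code):
    query_feats: (N, d) tokens of one task,
    context_feats: (M, d) pooled tokens from all tasks.
    Each query token takes a scaled-dot-product attention-weighted
    average of the cross-task context tokens."""
    d = query_feats.shape[-1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (N, M)
    attn = softmax(scores, axis=-1)                      # rows sum to 1
    return attn @ context_feats                          # (N, d)

rng = np.random.default_rng(0)
seg = rng.standard_normal((16, 32))        # e.g. segmentation tokens
dep = rng.standard_normal((16, 32))        # e.g. depth tokens
ctx = np.concatenate([seg, dep], axis=0)   # cross-task message pool
fused = fusion_attention(seg, ctx)
print(fused.shape)  # (16, 32)
```

I'd mainly like to see where (and whether) something like this lives in the released code, and how the selective part gates or prunes the cross-task messages.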

ierroric · Oct 11 '25 12:10