Explanation for Hybrid Transformer+Perceptron Network
❓ Questions and Help
We have a set of resources listed on the website and in the FAQ: https://captum.ai/ and https://captum.ai/docs/faq . Feel free to open an issue here on GitHub or in our discussion forums:
###############################################
Is it possible to generate an explanation for a hybrid network of a combination of MLPs and transformers?
Example:
Network 1 = multilayer perceptron that takes numerical columns (features) as input
Network 2 = transformer 1, taking sequence data as input
Network 3 = transformer 2, taking sequence data as input
Concatenate 1 = [Network 1, Network 2, Network 3]
Dense layers following the concatenation
Output

The explanation format could be feature importance (for the Network 1 input) plus word importance (for the Network 2 and Network 3 inputs), OR a standard explanation format across all columns + words together.