
Example for LLMGradientAttribution is missing.

Open saxenarohit opened this issue 2 years ago • 5 comments

📚 Documentation

This is in reference to the tutorial page below: https://captum.ai/tutorials/Llama2_LLM_Attribution

I could not find an example of LLMGradientAttribution for Llama2.

Any help on this will be appreciated.

Thanks

saxenarohit avatar Jan 31 '24 14:01 saxenarohit

@saxenarohit thanks for reminding us. We will add it soon.

aobo-y avatar Jan 31 '24 18:01 aobo-y

Hi @aobo-y, when you have a moment, could you tell me which model layer I need to pass as a parameter to LayerIntegratedGradients?

Dongximing avatar Mar 22 '24 20:03 Dongximing

Hi @Dongximing, it should be the embedding layer of your model. Since a token is discrete, its backpropagated gradient stops at its embedding. For Llama2, it would be something like the following:

from captum.attr import LayerIntegratedGradients

emb_layer = model.get_submodule("model.embed_tokens")  # Llama2 token embedding layer
lig = LayerIntegratedGradients(model, emb_layer)
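
For completeness, a minimal sketch of how this LayerIntegratedGradients instance could then be wrapped in LLMGradientAttribution; the prompt and target strings below are placeholders, and skip_tokens=[1] assumes Llama2's BOS token id is 1:

from captum.attr import LLMGradientAttribution, TextTokenInput

# wrap the layer-IG instance so attribution runs over generated/target text
llm_attr = LLMGradientAttribution(lig, tokenizer)

# placeholder prompt and target; skip_tokens=[1] excludes the BOS token from attribution
inp = TextTokenInput("The capital of France is", tokenizer, skip_tokens=[1])
attr_res = llm_attr.attribute(inp, target="Paris")
attr_res.plot_token_attr(show=True)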

aobo-y avatar Mar 22 '24 21:03 aobo-y

Thanks. I saw the result and analyzed the code; the final attributions are computed on log_softmax. Does that mean that, with contributions like -10, 20, -20, token_1 and token_2 are both important? Or do we need abs() to evaluate the importance of tokens?

Dongximing avatar Mar 25 '24 20:03 Dongximing

Any tutorial update? @aobo-y

qingyuanxingsi avatar Jul 02 '24 12:07 qingyuanxingsi

Hi, we have finally added this in https://github.com/pytorch/captum/commit/e01c07b741be53bbf77d425725088cc0dd430005. Thanks again for bringing this to our attention!

craymichael avatar Aug 22 '24 18:08 craymichael