
Add PartitionSHAP or other fast attribution method

Open · LennardZuendorf opened this issue 2 years ago · 1 comment

🚀 Feature

Please consider adding PartitionSHAP (or another fast attribution method)

Motivation

The currently available attribution methods take very long to run on large models like Llama 2 or Mistral.

Pitch

The PartitionSHAP implementation from the shap package runs in only minutes on a model like Mistral or GPT-2.

It is their best-performing explainer in terms of runtime, especially for text generation, and runs significantly quicker than any other method. The attributions are a bit less accurate, but the performance is very good.

As far as I know, it's the fastest model-agnostic explanation approach. Anything else using Owen values should also be very fast.
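For context, this is roughly how the Partition explainer is invoked through the shap package today; a minimal sketch based on the shap documentation, where the pipeline task and input sentence are illustrative choices, not something specified in this issue:

```python
import shap
import transformers

# A small sentiment pipeline keeps the example quick; any text model works.
pipe = transformers.pipeline("sentiment-analysis", return_all_scores=True)

# For text inputs, shap.Explainer dispatches to the Partition explainer by default
# (it can also be constructed explicitly via shap.explainers.Partition).
explainer = shap.Explainer(pipe)
shap_values = explainer(["The new attribution method is impressively fast."])
print(shap_values)
```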

Alternatives

  • Any other super quick attribution method would be very welcome 😄
  • I've listed fastSHAP below, which is also very fast but not proven on any LLMs.

Additional context

This is the PartitionSHAP implementation from the shap package. There's also fastSHAP, though I am not sure how applicable it would be to LLMs.

LennardZuendorf · Jan 02 '24

Hello, I'd like to implement PartitionSHAP following the Captum API. Proposed class: captum.attr.PartitionShap. It will mirror ShapleyValueSampling but use the hierarchical clustering / greedy partitioning trick from Lundberg & Erion (2021). ETA: two weeks. Feedback welcome!
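For reference, a minimal sketch of the existing ShapleyValueSampling API that the proposal says it would mirror; the PartitionShap lines at the end are hypothetical, since no such class exists in Captum yet, and the toy model is only there to make the snippet self-contained:

```python
import torch
import torch.nn as nn
from captum.attr import ShapleyValueSampling

# Toy classifier standing in for a real network; any forward function works.
model = nn.Sequential(nn.Linear(4, 3), nn.ReLU(), nn.Linear(3, 3))
inputs = torch.rand(2, 4)

# Existing Captum API the proposed class would mirror.
svs = ShapleyValueSampling(model)
attributions = svs.attribute(inputs, target=0, n_samples=25)

# Hypothetical drop-in usage of the proposed class (not in Captum yet):
#   from captum.attr import PartitionShap
#   ps = PartitionShap(model)
#   attributions = ps.attribute(inputs, target=0, feature_mask=...)
```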

sisird864 · Jul 13 '25