ferret
A Python package for benchmarking interpretability techniques on Transformers.
Add batched prediction for LOO & faithfulness evaluations (model wrapper: `_get_class_predicted_probabilities_texts`)
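Batching here would mean chunking the input texts before calling the model, rather than predicting one text at a time. The sketch below is a hypothetical standalone helper, not ferret's actual wrapper method; the names `predict_in_batches` and `predict_fn` are illustrative assumptions.

```python
from typing import Callable, List, Sequence

def predict_in_batches(
    texts: Sequence[str],
    predict_fn: Callable[[List[str]], List[List[float]]],
    batch_size: int = 8,
) -> List[List[float]]:
    """Run predict_fn over texts in fixed-size chunks and concatenate
    the per-text class-probability vectors it returns."""
    probs: List[List[float]] = []
    for start in range(0, len(texts), batch_size):
        batch = list(texts[start:start + batch_size])
        probs.extend(predict_fn(batch))
    return probs
```

For LOO and faithfulness metrics, which generate many perturbed copies of the same input, grouping those copies into batches like this avoids one forward pass per perturbation.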
* ferret version: 0.4.1
* Python version: 3.9
* Operating System: macOS

### Description

I'm following the getting-started guide of the ferret framework on macOS and encountered several technical issues (due...
Thanks for rolling out an excellent model-explainer package. Is there any plan to support seq2seq models in the future?
When we visualize faithfulness and plausibility metrics, we use a color scheme where, generally, the darker the cell, the better. However, the color ranges are unclear and seem to be different...
As far as I know, the [LIME library](https://github.com/marcotcr/lime) generates rationales over the vocabulary of the sentence, i.e. a relevance value is predicted for each unique subtoken. For example: ``` exp = explainer.explain_instance("Hello...
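To illustrate the distinction being raised: if an explainer returns one score per unique token, it has to be expanded to one score per token position before it can be compared with position-level attributions. This is a hypothetical helper, not part of ferret's or LIME's API:

```python
from typing import Dict, List

def broadcast_unique_scores(tokens: List[str],
                            unique_scores: Dict[str, float]) -> List[float]:
    """Expand one relevance value per unique token (vocabulary-level,
    LIME-style) into one value per token position, repeating the score
    at every occurrence of that token."""
    return [unique_scores.get(tok, 0.0) for tok in tokens]
```

Note that repeated tokens necessarily receive identical scores under this expansion, which is exactly the information a position-level attribution method could distinguish.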
* ferret version: 0.4.0
* Python version: 3.9.2
* Operating System: Debian Linux

### Description

When comparing your feature attribution scores of the explanation provided by Integrated Gradients (plain) with...
Part of our API accepts human rationale annotations to evaluate explanations on plausibility metrics. It is reasonable to assume that human workers will annotate text with such rationales using open-source...
Our current visualization is based on styling pandas tables. This is fine for short sequences where we only need to inspect a few items; table visualization does not fit well with...
* ferret version: 0.4.1
* Python version: 3.10.9
* Operating System: Ubuntu 20.04.5 LTS

### Description

I am loading ferret's explainer with...