transformers-interpret
How to use transformers-interpret for sequence labelling, for example with LayoutLMv3?
I was testing it on LayoutLMv3 and I am facing an error:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-47-f0c042620a72> in <module>
----> 1 word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=['O'])

(3 intermediate frames hidden)

/usr/lib/python3.7/re.py in sub(pattern, repl, string, count, flags)
    192     a callable, it's passed the Match object and must return
    193     a replacement string to be used."""
--> 194     return _compile(pattern, flags).sub(repl, string, count)
    195
    196 def subn(pattern, repl, string, count=0, flags=0):

TypeError: expected string or bytes-like object
The code I am using is:
from PIL import Image
from transformers_interpret import TokenClassificationExplainer

# Explainer built from the fine-tuned LayoutLMv3 model and its tokenizer
ner_explainer = TokenClassificationExplainer(
    model,
    processor.tokenizer,
)
# Passing the receipt image directly is what raises the TypeError above
word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=["O"])
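From the traceback, the failure happens inside re.sub, which suggests TokenClassificationExplainer expects a raw text string rather than a PIL image. For reference, this is roughly the intended call pattern with a plain text token classification model (the checkpoint name and example sentence below are just illustrative placeholders, not taken from my setup):

from transformers import AutoModelForTokenClassification, AutoTokenizer
from transformers_interpret import TokenClassificationExplainer

# Placeholder text-only NER checkpoint, used only to show the expected input type
model_name = "dslim/bert-base-NER"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

ner_explainer = TokenClassificationExplainer(model, tokenizer)

# The explainer is called with a string, not an image
word_attributions = ner_explainer(
    "Total 12.50 USD paid at ACME Store on 2021-03-01",
    ignored_labels=["O"],
)

So my question is really how to get from a receipt image to something the explainer can attribute over, since LayoutLMv3 does not take plain text alone.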
Hi, I have a similar use case with LayoutLMv3ForTokenClassification and LayoutLMv3Processor. Would it be possible to interpret these models for token classification on datasets like SROIE? A sketch of how my inputs are prepared is below.
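For reference, this is roughly how the inputs for LayoutLMv3ForTokenClassification are built (the checkpoint, num_labels and OCR settings here are just an example, not my exact config). The processor produces bbox and pixel_values in addition to input_ids, so an explainer would presumably need to forward all of these tensors rather than a plain string:

from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# Example base checkpoint; in practice a SROIE fine-tuned checkpoint would be used
checkpoint = "microsoft/layoutlmv3-base"
processor = LayoutLMv3Processor.from_pretrained(checkpoint)  # apply_ocr=True by default, needs pytesseract
model = LayoutLMv3ForTokenClassification.from_pretrained(checkpoint, num_labels=5)

image = Image.open("/content/receipt_00073.png").convert("RGB")

# OCR runs inside the processor; the encoding holds input_ids, attention_mask, bbox and pixel_values
encoding = processor(image, return_tensors="pt", truncation=True)

outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # per-token label ids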