
Chunking for the 384-word limit

Open rjalexa opened this issue 1 year ago • 4 comments

What is the best way to chunk longer texts so that each chunk fits under the 384-word (or 512-subtoken) limit? My articles average around 1200 tokens (roughly 5000 characters). Thank you.

rjalexa avatar May 08 '24 12:05 rjalexa

Hi, I think that gliner-spacy (https://github.com/theirstory/gliner-spacy?ref=bramadams.dev) integrates a chunking function.

Cc @wjbmattingly

urchade avatar May 08 '24 12:05 urchade

Hi all. Yes, GLiNER spaCy handles the chunking for you. I kept the chunk size as an argument so that as the GLiNER model improves (and can handle larger inputs), the package won't need to be updated.
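For readers who want a library-agnostic version of what such chunking does, here is a minimal sketch that greedily packs sentences into chunks under a word budget (`max_words=384` matches the limit discussed above; the regex-based sentence split is an assumption, not gliner-spacy's actual implementation - swap in spaCy's sentencizer for real use):

```python
import re

def chunk_text(text, max_words=384):
    """Greedily pack sentences into chunks of at most max_words words.

    Sentences longer than max_words are hard-split on word boundaries
    so no chunk ever exceeds the budget.
    """
    # Naive sentence split on ., !, ? followed by whitespace (an
    # assumption; use a proper sentencizer in production).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current, count = [], [], 0
    for sent in sentences:
        words = sent.split()
        if not words:
            continue
        if len(words) > max_words:
            # Flush the current chunk, then hard-split the long sentence.
            if current:
                chunks.append(" ".join(current))
                current, count = [], 0
            for i in range(0, len(words), max_words):
                chunks.append(" ".join(words[i:i + max_words]))
            continue
        if count + len(words) > max_words:
            chunks.append(" ".join(current))
            current, count = [], 0
        current.extend(words)
        count += len(words)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Note that 384 words is only a proxy for the real 512-subtoken limit: words can split into multiple subtokens, so a tighter word budget (or counting with the model's tokenizer) is safer.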

wjbmattingly avatar May 08 '24 17:05 wjbmattingly

Thank you

rjalexa avatar May 09 '24 05:05 rjalexa

On that note, is it possible to use GLiNER spaCy's chunking for finetuning GLiNER, specifically the urchade/gliner_multi_pii-v1 model? I'm also dealing with large data.

abedit avatar May 10 '24 05:05 abedit

I believe there are a few of us working on GLiNER finetuning packages. I have one that's not ready yet, but I believe @urchade has made progress and has a few notebooks in this repository to get you started. In all these cases, you could use gliner-spacy to help with the annotation process in something like Prodigy, from ExplosionAI. It's primarily what I use for annotating textual data because it works so easily with spaCy. You would then need to modify the output to align with the GLiNER finetuning format. This is actually exactly what we did for the Placing the Holocaust project. You can see our GLiNER-finetuned model here: https://huggingface.co/placingholocaust/gliner_small-v2.1-holocaust
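To illustrate the "modify the output" step, here is a hedged sketch that converts character-offset annotations (the kind an annotation tool typically exports) into a token-level training record. The `{"tokenized_text": ..., "ner": ...}` field names and inclusive token indices follow the format used in the repo's example training notebooks, but verify against the notebook you actually finetune with; the whitespace tokenization is an assumption:

```python
def to_gliner_record(text, spans):
    """Convert (start_char, end_char, label) spans into a GLiNER-style
    training record with token-level, inclusive span indices."""
    # Whitespace tokenization with character offsets (an assumption;
    # match whatever tokenizer your training script expects).
    tokens, offsets, pos = [], [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        tokens.append(tok)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    ner = []
    for start_char, end_char, label in spans:
        # Keep tokens fully covered by the annotated character span.
        idx = [i for i, (s, e) in enumerate(offsets)
               if s >= start_char and e <= end_char]
        if idx:
            ner.append([idx[0], idx[-1], label])
    return {"tokenized_text": tokens, "ner": ner}
```

For example, `to_gliner_record("John Smith lives in Paris", [(0, 10, "person"), (20, 25, "location")])` maps the character spans onto token indices 0-1 and 4-4 respectively.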

wjbmattingly avatar May 30 '24 09:05 wjbmattingly