python-stanford-corenlp
String interning
Hey @arunchaganty ,
@jekbradbury and @bmccann recently discovered a huge performance oversight in @jekbradbury's tokenization library: adding string interning improved DecaNLP performance by something like 100x. It dawned on me that this Python client doesn't seem to intern strings either, so the output annotations may be storing a bazillion copies of the same words, glosses, tags, whitespace, etc. Can you confirm/deny this?
For reference the issue in question is here: https://github.com/jekbradbury/revtok/pull/4
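For context, here is a minimal sketch of what interning buys (this is illustrative, not a claim about how the client currently builds its annotations). Python's built-in `sys.intern` maps equal strings to one shared object, so a corpus full of repeated tokens stores a single copy per distinct string instead of one per occurrence:

```python
import sys

# Tokens parsed from text are freshly allocated strings, so repeated
# words can produce many distinct objects with identical contents.
raw = "the cat sat on the mat".split()
assert raw[0] == raw[4]  # equal contents: both "the"

# sys.intern returns a canonical shared object for each distinct string,
# so annotations holding millions of repeated words/tags keep one copy each.
interned = [sys.intern(tok) for tok in raw]
assert interned[0] is interned[4]  # identical object, not just equal
```

Beyond the memory savings, interned strings also make equality checks effectively pointer comparisons, which is presumably where the DecaNLP speedup came from.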