
Converting to ONNX still depends on PyTorch

milosacimovic opened this issue 1 year ago • 2 comments

Is it possible to export to ONNX and run inference without depending on PyTorch?

milosacimovic avatar Jun 19 '24 13:06 milosacimovic

Thank you for pointing this out. You would need to change the processor to rely on NumPy, and rewrite part of the conversion script to use ONNX instead of PyTorch. We will do it shortly, but any contribution from your side that can accelerate it is welcome.

Ingvarstep avatar Jun 19 '24 14:06 Ingvarstep
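The NumPy-based processor mentioned above could look something like the sketch below. This is illustrative only (the function and input names are assumptions, not GLiNER's actual API): it collates variable-length token-id lists into padded NumPy arrays of the kind onnxruntime consumes, with no torch dependency.

```python
import numpy as np

def pad_batch(sequences, pad_id=0):
    """Pad variable-length token-id lists into a dense NumPy batch.

    Illustrative NumPy-only collation; GLiNER's real processor also
    builds span- and word-level inputs on top of this.
    """
    max_len = max(len(s) for s in sequences)
    input_ids = np.full((len(sequences), max_len), pad_id, dtype=np.int64)
    attention_mask = np.zeros((len(sequences), max_len), dtype=np.int64)
    for i, seq in enumerate(sequences):
        input_ids[i, : len(seq)] = seq
        attention_mask[i, : len(seq)] = 1
    return {"input_ids": input_ids, "attention_mask": attention_mask}
```

A dict like this can be fed directly to `onnxruntime.InferenceSession.run`, provided the keys match the input names of the exported graph.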

Do you know of any way to export the tokenizer to ONNX as well? Right now it seems to pull in torch too, via transformers: it is loaded with AutoTokenizer from transformers, which relies on torch.

milosacimovic avatar Jul 04 '24 09:07 milosacimovic
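A tokenizer is string processing rather than tensor computation, so it is not usually exported to ONNX; but it also does not need torch. The standalone `tokenizers` library (which transformers itself wraps) loads the same `tokenizer.json` with no transformers or torch dependency. A minimal sketch, assuming a `tokenizer.json` shipped alongside the model; a toy vocabulary stands in here so the example is self-contained:

```python
import numpy as np
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace

# In practice you would load the real file instead of building a toy one:
#   tok = Tokenizer.from_file("tokenizer.json")
tok = Tokenizer(WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tok.pre_tokenizer = Whitespace()

enc = tok.encode("hello world")
input_ids = np.array([enc.ids], dtype=np.int64)
attention_mask = np.array([enc.attention_mask], dtype=np.int64)

# These arrays feed straight into onnxruntime; the input names below are
# assumptions and depend on how the model graph was exported:
#   import onnxruntime as ort
#   session = ort.InferenceSession("model.onnx")
#   outputs = session.run(None, {"input_ids": input_ids,
#                                "attention_mask": attention_mask})
```

With this setup the only runtime dependencies are `tokenizers`, `numpy`, and `onnxruntime`.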