Michael Benayoun

32 comments by Michael Benayoun

So you also need to provide the inputs for the decoder side, something along these lines:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from onnxruntime import InferenceSession

tokenizer = AutoTokenizer.from_pretrained("opus-mt-en-zh")
session = InferenceSession("opus-mt-en-zh-onnx-301/model.onnx")
...
```
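Continuing the snippet above, here is a minimal sketch of a single decoder step. The input names (`input_ids`, `attention_mask`, `decoder_input_ids`) are assumptions about how the model was exported; check `session.get_inputs()` for the exact list, as some exports also expect `decoder_attention_mask`:

```python
import numpy as np

encoded = tokenizer("Hello world", return_tensors="np")

# Assumption: Marian models use the pad token as the decoder start token.
decoder_input_ids = np.array([[tokenizer.pad_token_id]], dtype=np.int64)

outputs = session.run(
    None,  # return all outputs
    {
        "input_ids": encoded["input_ids"].astype(np.int64),
        "attention_mask": encoded["attention_mask"].astype(np.int64),
        "decoder_input_ids": decoder_input_ids,
    },
)
logits = outputs[0]  # shape: (batch_size, decoder_seq_len, vocab_size)
next_token_id = int(logits[0, -1].argmax())
```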

Basically, if you want your ONNX model to predict the next token, provide the start-of-sentence token as the first decoder token. Maybe you do not have `tokenizer.bos_token_id`, but you...
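A hedged sketch of that fallback (Marian tokenizers usually define no BOS token, so the pad token serves as the decoder start; verify for your checkpoint):

```python
# Fall back to the pad token when the tokenizer defines no BOS token
# (this is the convention for Marian models; check your own model's config).
start_id = tokenizer.bos_token_id
if start_id is None:
    start_id = tokenizer.pad_token_id
```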

I think it's safe to only run those tests on CPU. Also, when run locally it takes ~1 min (although I agree my machine might be more powerful).

I think it's okay now with the changes you've made!

I will check on Monday and get back to you; it should be easy to fix, I think.

Yes, I think the original feature names were chosen by looking at the class names (`BertForSequenceClassification`, etc.). I think @fxmarty's first suggestion could work and is easy to implement.

Hi @jessecambon, this seems related to the ONNX export. We are currently working on adding support for this in `optimum`, and providing a tokenizer will not be needed to perform...

Yes to both, feel free! I updated the list to note that you are working on it.

The `TasksManager` allows mapping model classes to export configurations, here ONNX ones. Registering your ONNX config will make it possible for you to use it with the CLI and...
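A minimal registration sketch, assuming a recent `optimum` version; `MyModelOnnxConfig` and the `"my-model"` model type are hypothetical placeholders:

```python
from optimum.exporters import TasksManager
from optimum.exporters.onnx.model_configs import BertOnnxConfig

# create_register returns a decorator factory bound to the "onnx" backend.
register_for_onnx = TasksManager.create_register("onnx")

@register_for_onnx("my-model", "feature-extraction", "text-classification")
class MyModelOnnxConfig(BertOnnxConfig):
    # Reuses BERT's inputs/outputs; override them if your architecture differs.
    pass
```

Once registered, the config should be discoverable for the listed tasks, e.g. via `optimum-cli export onnx`.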

If you do it programmatically I do not think you need to register anything. What's your model? You put `bert` here, but `bert` is already registered for ONNX, so nothing...
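For the programmatic route, a sketch using `main_export` (the checkpoint name is just an example):

```python
from optimum.exporters.onnx import main_export

# `bert` is already registered, so no custom ONNX config is needed here.
main_export(
    "bert-base-uncased",       # any Hub model ID or local path
    output="bert_onnx",
    task="text-classification",
)
```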