Guenther Schmuelling
I assume this was resolved.
Yes, TensorScatterAdd and TensorListConcatV2 are not mapped - we'd need to add them.
Yes, we are aware of it and will fix it.
This model is a saved-model and --inputs should not be needed since tf2onnx will pick up the inputs and outputs from the saved-model. Sometimes you might need to specify --tag if you...
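For reference, converting a saved-model typically looks like this (the paths, opset, and tag below are placeholders, not taken from this issue):
```
python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx --opset 13
# if the saved-model uses a non-default tag set, add e.g.: --tag serve
```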
model seems to be fine with seqlen=128:
```
import os
import numpy as np
import onnxruntime as rt

def get_session(model_path):
    providers = ['CPUExecutionProvider']
    if rt.get_device() == "GPU":
        gpus = os.environ.get("CUDA_VISIBLE_DEVICES")
        ...
```
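A guess at how the truncated helper above continues - the CUDA provider handling and session creation below are assumptions, not the original code:
```
import os
import onnxruntime as rt

def get_session(model_path):
    providers = ['CPUExecutionProvider']
    if rt.get_device() == "GPU":
        # assumption: only add the CUDA provider when a GPU is actually visible
        if os.environ.get("CUDA_VISIBLE_DEVICES") not in (None, "", "-1"):
            providers = ['CUDAExecutionProvider'] + providers
    return rt.InferenceSession(model_path, providers=providers)
```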
seqlen=64 was ok and I used v1. The default signature for v1 is
```
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_mask'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        ...
```
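The same signature information can be printed from a saved-model directory with saved_model_cli (directory and tag set below are placeholders):
```
saved_model_cli show --dir ./my_saved_model --tag_set serve --signature_def serving_default
```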
There should not be a safe_remove_nodes() ever, since if something points into the nodes to be removed they need to be rewired by the caller, because only the caller knows what...
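A minimal sketch of the pattern argued for here - the graph helpers below are hypothetical, not the tf2onnx API, just an illustration of "the caller rewires, then removes":
```
# illustration only: hypothetical graph helpers
def replace_and_remove(graph, old_node, replacement_output):
    # the caller knows what consumers should read instead,
    # so it rewires them explicitly first ...
    for consumer in graph.find_consumers(old_node.output[0]):
        consumer.replace_input(old_node.output[0], replacement_output)
    # ... and only then drops the now-unreferenced node
    graph.remove_node(old_node.name)
```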
Hm, this is odd - tf.device should do the trick. Let me test this a little. allow_growth=True might not be the right option because if you have a really large...
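For context, the two knobs mentioned here look roughly like this in TF1-style code; this is just an illustration of the options, not a recommendation:
```
import tensorflow as tf

# pin ops to a specific device
with tf.device('/cpu:0'):
    ...  # build ops here

# the allow_growth option being discussed
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
```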
This loop is for the batch dim. NMS gets called in the subgraph which netron does not show in the main view. To see the subgraph, click on a...
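If clicking through in netron is inconvenient, the subgraphs can also be listed with the onnx Python package (the model path is a placeholder):
```
import onnx

model = onnx.load("model.onnx")
for node in model.graph.node:
    for attr in node.attribute:
        # Loop/If/Scan nodes carry their bodies as GRAPH attributes
        if attr.type == onnx.AttributeProto.GRAPH:
            print(node.op_type, node.name, "->", [n.op_type for n in attr.g.node])
```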
Not a trivial thing to optimize that loop away correctly when you switch to static shapes. You could reconvert the model with tf2onnx - there is an option to...
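A possible candidate for that option (a guess, since the comment is truncated) is the --inputs shape override, which pins inputs to fixed shapes at conversion time; input names and shapes below are placeholders and the exact behavior depends on the tf2onnx version:
```
python -m tf2onnx.convert --saved-model ./my_saved_model --output model_static.onnx \
    --inputs input_ids:0[1,128],input_mask:0[1,128]
```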