gannn
Following the README and expanding the tokenizer on your own works well for me.
After reading the source code, I found that BentoML may convert my input to np.float32, but my ONNX model takes int64 as its input. Could this be a bug?
```python
def inf(inputs):
    inputs = json.loads(inputs)
    inputs['input_ids'] = np.array(inputs['input_ids'], dtype=np.float32)
    inputs['input_mask'] = np.array(inputs['input_mask'], dtype=np.float32)
    inputs['input_seg'] = np.array(inputs['input_seg'], dtype=np.float32)
    print(np.shape(inputs['input_ids']), np.shape(inputs['input_mask']), np.shape(inputs['input_seg']))
    outputs = onnx_runner.run.run(...
```
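Until the conversion behavior is confirmed, one workaround is to cast each array to int64 yourself before handing it to the runner. Below is a minimal sketch of that idea using only `json` and NumPy; `prepare_inputs` and the sample field names are illustrative, not part of the BentoML API:

```python
import json
import numpy as np

def prepare_inputs(payload: str) -> dict:
    """Parse a JSON payload and cast each field to int64,
    matching an ONNX model whose inputs are declared as int64."""
    inputs = json.loads(payload)
    for key in ('input_ids', 'input_mask', 'input_seg'):
        # Force the dtype the model expects, regardless of any
        # upstream float32 conversion.
        inputs[key] = np.array(inputs[key], dtype=np.int64)
    return inputs

payload = json.dumps({
    'input_ids': [[101, 2054, 102]],
    'input_mask': [[1, 1, 1]],
    'input_seg': [[0, 0, 0]],
})
prepared = prepare_inputs(payload)
print(prepared['input_ids'].dtype)  # int64
```

The resulting arrays can then be passed to the ONNX runner in place of the float32 ones; ONNX Runtime rejects inputs whose dtype does not match the graph's declared input type, so the explicit cast sidesteps the error.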
> +1 This bug can even be reproduced in a simple way: after creating a database connection, I insert three records, each with 768 dimensions. Immediately after, I let the...