Pranav Sharma
Doesn't this require a version upgrade?
+1 to this. We use Azure DevOps for CI and I don't see the checks in the app.
> CC @pranavsharma does ORT provides API for doing so? Or can a ORT session be run for different inferences in parallel? Not fully following. What API are you looking...
fyi @jywu-msft
Any update on this?
We've not planned for it yet. Would you like to contribute?
The config.pbtxt seems wrong. I started Triton with ```--strict-model-config=false``` and it loaded the model just fine. You can then query the [generated config](https://gist.github.com/pranavsharma/a2831134e374e76fa2c47510a82685cb) with [this URL](http://localhost:8000/v2/models/yolov3/config). Hope this helps!
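For reference, a minimal sketch of those two steps (the model-repository path and port are assumptions, not from the original report):

```
# Let Triton derive the model configuration itself instead of hand-writing config.pbtxt
tritonserver --model-repository=/models --strict-model-config=false

# Then fetch the configuration Triton inferred for the model
curl http://localhost:8000/v2/models/yolov3/config
```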
It doesn't look like this model supports dynamic batching. The first dimension of the outputs ```yolonms_layer_1/ExpandDims_1:0``` and ```yolonms_layer_1/ExpandDims_3:0``` is __1__, not -1. The error message ```0330 19:07:41.647411 1 model_repository_manager.cc:1186] failed...
Several issues have been [reported](https://github.com/triton-inference-server/server/issues?q=is%3Aissue+%22but+shape+expected+by+the+model+is%22+is%3Aclosed) that cite this error.
The first thing to do is to check whether the model can be run with ORT alone (that is, without Triton). Even there I would first check if this is...