Kyle McGill

50 comments by Kyle McGill

@pranavsharma We are currently testing this change in our CI. Thank you for your work so far on this!

Hi @nrepesh, can you please provide the exact command lines for `docker run ...` and `tritonserver ...`?

> We are using a slew of Tensorflow, xgboost and Onnx models with warmups and batching

Are you able to isolate the causing backend, or does this only occur when...

> Would you suggest us to isolate the backends and possibly recreate the issue to identify if it's the isolated backends causing the problem?

If possible, this would help us...

The structure which @dyastremsky is discussing would look something like:

```
<model-repository>/
  <model-name-1>/
    config.pbtxt
    1/
      model.onnx
  <model-name-2>/
    config.pbtxt
    1/
      model.onnx
```

Hi @zbh0323, is your `--model-repository=/root/gaea-serving/model/car_atti_0815/model/` pointing to the root directory for your models? Triton expects this path to be the [top level directory](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_repository.md#model-files) for all models to be...

Hi @JavanehBahrami, from the code you have provided, it appears you might be unregistering all shared memory regions rather than just the current one inside your for loop: `triton_client.unregister_system_shared_memory()` and `triton_client.unregister_cuda_shared_memory()`. You...
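
For reference, a minimal sketch of the distinction using the `tritonclient` HTTP client; the client URL and region names here are hypothetical placeholders, not taken from the issue:

```python
import tritonclient.http as httpclient

# Hypothetical client and region names, for illustration only.
triton_client = httpclient.InferenceServerClient(url="localhost:8000")

region_names = ["input_region_0", "input_region_1"]

for name in region_names:
    # Passing a region name unregisters only that specific region.
    triton_client.unregister_system_shared_memory(name)

# Calling the same method with no arguments unregisters *all* system shared
# memory regions, which is usually not what you want inside a loop.
# triton_client.unregister_system_shared_memory()

# The same pattern applies to unregister_cuda_shared_memory().
```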

Hi @badskeet, can you explain a little more about your use case, please? The shared memory feature is used for communication between the client and server. BLS shouldn't need to...
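
To illustrate the point that shared memory is a client-to-server transport (rather than something BLS itself requires), here is a minimal sketch of the usual client-side flow with the `tritonclient` Python package; the model input name, shapes, and region names are hypothetical and only meant to show the shape of the API:

```python
import numpy as np
import tritonclient.http as httpclient
import tritonclient.utils.shared_memory as shm

# Hypothetical tensor and region names, for illustration only.
triton_client = httpclient.InferenceServerClient(url="localhost:8000")

input_data = np.arange(16, dtype=np.float32).reshape(1, 16)
byte_size = input_data.size * input_data.itemsize

# The client creates a system shared memory region, copies the input into it,
# and registers it with the server so the server can read the tensor directly.
shm_handle = shm.create_shared_memory_region("input_region", "/input_region", byte_size)
shm.set_shared_memory_region(shm_handle, [input_data])
triton_client.register_system_shared_memory("input_region", "/input_region", byte_size)

# The inference input then references the registered region instead of
# carrying the tensor bytes over the network.
infer_input = httpclient.InferInput("INPUT0", [1, 16], "FP32")
infer_input.set_shared_memory("input_region", byte_size)

# ... run inference, then clean up ...
triton_client.unregister_system_shared_memory("input_region")
shm.destroy_shared_memory_region(shm_handle)
```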