TensorRT Support
TensorRT is widely used in production ML systems. However, it adds another layer to the dependency hell across TensorRT/Python/CUDA/cuDNN versions.
Right now the cleanest solution seems to be using the NVIDIA-provided NGC container. It would be great to support this in your framework. Happy to contribute.
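For reference, a minimal sketch of what the NGC route looks like today as a plain Dockerfile (not the framework's own syntax; the image tag matches the 22.04 release notes linked below, and `model.onnx` is a placeholder):

```dockerfile
# Pin to a specific NGC release so the TensorRT/CUDA/cuDNN versions are
# mutually compatible (see the support matrix linked below).
FROM nvcr.io/nvidia/tensorrt:22.04-py3

WORKDIR /app
COPY model.onnx .

# Compile the ONNX model to a TensorRT engine at image build time, not at
# runtime. trtexec ships with the NGC image; adjust the path if it is not
# already on PATH in your release.
RUN trtexec --onnx=model.onnx --saveEngine=model.plan --fp16

COPY . .
```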
TensorRT docs:
- https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing
- https://docs.nvidia.com/deeplearning/tensorrt/archives/index.html
- https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html
TensorRT NGC Container docs:
- https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt
- https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/index.html
- https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/rel_22-04.html#rel_22-04
- https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html#framework-matrix-2022
@bfirsh
Going on two years here, but this would be awesome! The important thing is that downloading weights and compiling would all happen at the build stage, as opposed to using torch.compile, which requires JIT compilation to happen when the setup function is called.
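To make the split concrete, here is a rough sketch using the TensorRT 8.x Python API (file names are placeholders; this is one plausible shape, not a proposal for the framework's API): the first function would run at image build time, the second in setup().

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path: str, plan_path: str) -> None:
    """Build stage: compile an ONNX model into a serialized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, TRT_LOGGER)
    if not parser.parse_from_file(onnx_path):
        errors = [str(parser.get_error(i)) for i in range(parser.num_errors)]
        raise RuntimeError(f"ONNX parse failed: {errors}")

    config = builder.create_builder_config()
    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("Engine build failed")
    with open(plan_path, "wb") as f:
        f.write(serialized)

def load_engine(plan_path: str) -> trt.ICudaEngine:
    """setup(): deserialize the prebuilt engine only, no compilation."""
    runtime = trt.Runtime(TRT_LOGGER)
    with open(plan_path, "rb") as f:
        return runtime.deserialize_cuda_engine(f.read())
```

The expensive optimization pass all lives in build_engine, so cold starts only pay for deserialization.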
Also happy to contribute.
Any update on this?