ninono12345
Hello everyone. I have been working on a project to convert my PyTorch model to TensorRT for faster inference. I have converted my model to ONNX successfully, and run...
### Describe the issue When I run my model through this command: `polygraphy surgeon sanitize tomp101_head_latest3.onnx --fold-constants -o latest3_sanitized2.onnx` I get this: `2024-01-16 17:44:05.4688013 [W:onnxruntime:, unsqueeze_elimination.cc:20 onnxruntime::UnsqueezeElimination::Apply] UnsqueezeElimination cannot remove...`
## ❓ Question Hello, I've encountered problems installing torch-tensorrt on Windows 10. No matter how I try, or how many sources I look up, there is no clear explanation on...
## Bug Description When compiling a PyTorch model with torch-tensorrt, I get different errors depending on the compilation method. Using this code: `head_jit = torch.jit.trace(om.head, idd)` `trt_head2 = torch_tensorrt.compile( head_jit, ir="ts",`...
Hello @martin-danelljan, I want to ask you for advice, since you have been more active. Is it possible, for example when I'm tracking 4 objects at once, that...
## Description Hello everyone. I am working on converting a PyTorch object-tracking model to TensorRT for faster inference. When running TensorRT inference with a single batch, the model...
## Description Hi, I was going to use Polygraphy's TensorRT converter and calibrator, but this model uses InstanceNormalization, which requires the ONNX parser flag to be set: `parser.set_flag(trt.OnnxParserFlag.NATIVE_INSTANCENORM)`, so...
Hello, when I was working with TensorRT 8.6, I had made an engine for inference in Python. Example inputs and engine: ` im_patches = torch.randn(batch, 3, 288, 288) train_feat =...