tensorrt-utils
⚡ Useful scripts when using TensorRT
I am getting this error while compiling for INT8. The same code works fine for FP32 and FP16. ``` [05/31/2022-05:46:12] [TRT] [E] 1: Unexpected exception Traceback (most recent call last):...
Hello, thank you for your work! Do I have a problem with INT8-engine inference this way? I loaded the engine file directly but didn't use the calibration table; I'm...
Hello, I am trying to run an ONNX model using the TensorRT backend, but I get the following error: KeyError: 'output1_before_shuffle' ``` model = onnx.load(args.files) onnx.checker.check_model(model) input_shapes = [[d.dim_value for d...
https://github.com/rmccorm4/tensorrt-utils/blob/2c49b8404a5de3fe746716ff5e5ccf1755815819/int8/calibration/onnx_to_tensorrt.py#L89 Your utils are very helpful; please accept my appreciation.
## Description Hi, @rmccorm4. Currently, I'm trying to generate an INT8 TRT engine with calibration, like this:
```
calibrator = Calibrator(data_loader=calib_data(), cache="identity-calib.cache")
build_engine = EngineFromNetwork(
    NetworkFromOnnxPath("identity.onnx"),
    config=CreateConfig(int8=True, calibrator=calibrator)
)
```
But I was...
## Description
## Environment
**TensorRT Version**: 8.2
**GPU Type**: 2080ti
**Nvidia Driver Version**:
**CUDA Version**:
**CUDNN Version**:
**Operating System + Version**:
**Python Version (if applicable)**:
**TensorFlow Version (if applicable)**:
**PyTorch...
Hello, I'm trying to convert an ONNX model with dynamic batch size, created from Darknet (https://github.com/WongKinYiu/ScaledYOLOv4), to a TensorRT engine. I need to create a calibrated INT8 engine with a static batch size of 2....
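Not the repo's own code, but a minimal sketch of how a data loader for a static batch size of 2 might look, assuming the calibration inputs are already preprocessed NumPy arrays (the `(3, 416, 416)` shape and function names here are hypothetical):

```python
import numpy as np

def calib_batches(samples, batch_size=2):
    """Yield fixed-size batches for INT8 calibration.

    `samples` is an iterable of preprocessed input arrays, each shaped
    (C, H, W). The trailing incomplete batch is dropped, since a
    static-batch engine expects exactly `batch_size` inputs per step.
    """
    buf = []
    for s in samples:
        buf.append(np.asarray(s, dtype=np.float32))
        if len(buf) == batch_size:
            yield np.stack(buf)  # shape: (batch_size, C, H, W)
            buf = []

# Example: 5 dummy 3x416x416 inputs -> two batches of 2, remainder dropped
data = [np.zeros((3, 416, 416), dtype=np.float32) for _ in range(5)]
batches = list(calib_batches(data, batch_size=2))
```

A generator like this can back a calibrator's `get_batch` callback, whatever calibrator class is ultimately used.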
Hi, I found this repo very useful for understanding TRT INT8 functions. However, I don't quite understand the usage of the calibration data that is mentioned in the README; it...
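One common pattern (a sketch of the general idea, not necessarily this repo's exact approach) is to preprocess a representative set of inputs once and save them to a `.npy` file that a calibrator can later load and feed batch by batch. The image sizes and normalization below are illustrative assumptions:

```python
import os
import tempfile

import numpy as np

def preprocess(img):
    """Hypothetical preprocessing: uint8 HWC image -> normalized float32 CHW."""
    x = img.astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))  # HWC -> CHW

# Stack a representative sample of preprocessed inputs into one array;
# a calibrator can then load this file and slice batches out of it.
imgs = [np.zeros((416, 416, 3), dtype=np.uint8) for _ in range(4)]
calib = np.stack([preprocess(i) for i in imgs])  # (N, 3, 416, 416)

path = os.path.join(tempfile.mkdtemp(), "calib_data.npy")
np.save(path, calib)
loaded = np.load(path)
```

The calibration data only needs to cover the input distribution the deployed model will see; it is consumed at engine-build time, not at inference time.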
I'm wondering how to do inference with the saved INT8 TRT engine file. Is the inference process just the same as normal?
Hi @rmccorm4, I would like to ask for some advice on INT8 calibration. I've had no trouble building explicit-batch engines with batch > 1 in FP16, and I've managed...