BEVFormer_tensorrt

TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported: 1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) -> tensorrt.tensorrt.ICudaEngine

Open Tanzichang opened this issue 2 years ago • 10 comments

Hello,

I got an error when converting the ONNX model to a TRT engine using sh samples/bevformer/tiny/onnx2trt.sh -d 0. The detailed log is shown below:

[02/25/2023-12:57:54] [TRT] [V] Loaded shared library libcublasLt.so.11
[02/25/2023-12:57:54] [TRT] [V] Using cublasLt as core library tactic source
[02/25/2023-12:57:54] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +650, GPU +274, now: CPU 2219, GPU 2942 (MiB)
[02/25/2023-12:57:54] [TRT] [V] Trying to load shared library libcudnn.so.8
[02/25/2023-12:57:54] [TRT] [V] Loaded shared library libcudnn.so.8
[02/25/2023-12:57:54] [TRT] [V] Using cuDNN as plugin tactic source
[02/25/2023-12:57:55] [TRT] [V] Using cuDNN as core library tactic source
[02/25/2023-12:57:55] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +178, GPU +266, now: CPU 2397, GPU 3208 (MiB)
[02/25/2023-12:57:55] [TRT] [W] TensorRT was linked against cuDNN 8.6.0 but loaded cuDNN 8.1.1
[02/25/2023-12:57:55] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[02/25/2023-12:57:55] [TRT] [V] Constructing optimization profile number 0 [1/1].
[02/25/2023-12:57:55] [TRT] [E] 1: [constantBuilder.cpp::addSupportedFormats::32] Error Code 1: Internal Error (Constant output type does not support bool datatype.)
[02/25/2023-12:57:55] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Traceback (most recent call last):
  File "tools/bevformer/onnx2trt.py", line 262, in <module>
    main()
  File "tools/bevformer/onnx2trt.py", line 257, in main
    calibrator=calibrator,
  File "/root/paddlejob/workspace/env_run/BEVFormer_Tensorrt/det2trt/convert/onnx2tensorrt.py", line 63, in build_engine
    engine = runtime.deserialize_cuda_engine(plan)
TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) -> tensorrt.tensorrt.ICudaEngine

Invoked with: <tensorrt.tensorrt.Runtime object at 0x7fae1710bd30>, None

Tanzichang avatar Feb 25 '23 05:02 Tanzichang

Please provide your environment details and describe how they differ from the README.

DerryHub avatar Feb 25 '23 12:02 DerryHub

Hi, I got a similar error. My environment: TensorRT 8.5.1.7, cuDNN 8.5.0, CUDA 11.6, onnx 1.12.0, torch 1.12.0.

[02/28/2023-10:51:30] [TRT] [E] 1: [constantBuilder.cpp::addSupportedFormats::32] Error Code 1: Internal Error (Constant output type does not support bool datatype.)
[02/28/2023-10:51:30] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
Traceback (most recent call last):
  File "tools/bevformer/onnx2trt.py", line 255, in <module>
    main()
  File "tools/bevformer/onnx2trt.py", line 243, in main
    build_engine(
  File "/root/autodl-tmp/BEVFormer_tensorrt/./det2trt/convert/onnx2tensorrt.py", line 63, in build_engine
    engine = runtime.deserialize_cuda_engine(plan)
TypeError: deserialize_cuda_engine(): incompatible function arguments. The following argument types are supported:
    1. (self: tensorrt.tensorrt.Runtime, serialized_engine: buffer) -> tensorrt.tensorrt.ICudaEngine

Invoked with: <tensorrt.tensorrt.Runtime object at 0x7fdab6d48630>, None

HerrYu123 avatar Feb 28 '23 03:02 HerrYu123
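A note on the traceback above: `runtime.deserialize_cuda_engine(plan)` is invoked with `plan=None` ("Invoked with: ... Runtime object ..., None"), which means the serialized build step itself failed. The real cause is the earlier `[TRT] [E]` errors ("Constant output type does not support bool datatype" and "Assertion engine != nullptr failed"). A minimal sketch of a defensive wrapper that surfaces this instead of the confusing TypeError; the name `deserialize_or_raise` is hypothetical and this is illustrative, not the repo's actual code:

```python
def deserialize_or_raise(runtime, plan):
    """Deserialize a TensorRT plan, failing loudly if the build produced None.

    `runtime` is expected to expose deserialize_cuda_engine(), as
    tensorrt.Runtime does; `plan` is the buffer returned by
    builder.build_serialized_network(), which is None when the build fails.
    """
    if plan is None:
        # build_serialized_network returned None: the build itself failed.
        # Point the user at the real [TRT] [E] messages instead of letting
        # deserialize_cuda_engine raise a TypeError about argument types.
        raise RuntimeError(
            "TensorRT engine build failed (plan is None); "
            "see the [TRT] [E] messages in the log for the real cause."
        )
    return runtime.deserialize_cuda_engine(plan)
```

With a guard like this, the `Constant output type does not support bool datatype` error would be the obvious place to look, rather than the TypeError inside build_engine.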

Please give me your ONNX file and I will check it on my device.

DerryHub avatar Feb 28 '23 03:02 DerryHub

Thanks a lot! My onnx file is here

HerrYu123 avatar Feb 28 '23 06:02 HerrYu123

> Thanks a lot! My onnx file is here

samples/bevformer/base/onnx2trt.sh requires a lot of GPU memory with TensorRT 8.5.1.7, as mentioned in the README. Did you try BEVFormer small or tiny? I suggest you monitor GPU memory while running samples/bevformer/base/onnx2trt.sh. Alternatively, try the small or tiny models, or the custom plugin version.

DerryHub avatar Mar 01 '23 03:03 DerryHub
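One way to follow the monitoring suggestion above is to poll nvidia-smi from a side script while onnx2trt.sh runs. A sketch, assuming nvidia-smi is on PATH; `parse_mem_csv` and `watch_gpu_memory` are hypothetical helper names, not part of the repo:

```python
import shutil
import subprocess
import time


def parse_mem_csv(line):
    """Parse one line of `nvidia-smi --query-gpu=memory.used,memory.total
    --format=csv,noheader` output, e.g. "1234 MiB, 24576 MiB", into
    (used_mib, total_mib) integers."""
    used, total = (field.strip() for field in line.split(","))
    return int(used.split()[0]), int(total.split()[0])


def watch_gpu_memory(interval_s=1.0):
    """Print per-GPU memory usage once per interval until interrupted."""
    if shutil.which("nvidia-smi") is None:
        raise RuntimeError("nvidia-smi not found on PATH")
    while True:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader"],
            text=True,
        )
        for gpu_index, line in enumerate(out.splitlines()):
            used, total = parse_mem_csv(line)
            print(f"GPU {gpu_index}: {used} / {total} MiB")
        time.sleep(interval_s)
```

Running this in a second terminal while the conversion runs makes it easy to see whether memory peaks near the card's limit just before the build fails.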

> Thanks a lot! My onnx file is here

> samples/bevformer/base/onnx2trt.sh requires a lot of GPU memory with TensorRT 8.5.1.7, as mentioned in the README. Did you try BEVFormer small or tiny? I suggest you monitor GPU memory while running samples/bevformer/base/onnx2trt.sh. Alternatively, try the small or tiny models, or the custom plugin version.

Thanks for your reply. I used an RTX 3090 with 24 GB of memory and didn't see an out-of-GPU-memory error while running samples/bevformer/base/onnx2trt.sh. I'll try BEVFormer small and tiny later. By the way, I would appreciate it if you could provide a converted bevformer_base_trt file. Thanks again!

HerrYu123 avatar Mar 01 '23 04:03 HerrYu123

> Please provide your environment details and describe how they differ from the README.

Thank you. My problem was solved after making my environment match yours.

Tanzichang avatar Mar 01 '23 07:03 Tanzichang

> Thanks for your reply. I used an RTX 3090 with 24 GB of memory and didn't see an out-of-GPU-memory error while running samples/bevformer/base/onnx2trt.sh. I'll try BEVFormer small and tiny later. By the way, I would appreciate it if you could provide a converted bevformer_base_trt file. Thanks again!

You can downgrade TensorRT to 8.4.3.1 to convert BEVFormer base on a 3090, as mentioned in the README.

DerryHub avatar Mar 01 '23 08:03 DerryHub
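Since the thread turns on an exact TensorRT version, a small guard at the top of a conversion script can catch a mismatch early. A sketch; `matches_required_trt` is a hypothetical helper, and in practice `version` would come from `tensorrt.__version__` (the required version 8.4.3.1 is taken from the advice above):

```python
def matches_required_trt(version, required="8.4.3.1"):
    """Return True if the installed TensorRT version string exactly
    matches the version recommended for converting BEVFormer base."""
    def parse(v):
        # Compare numerically so e.g. "8.04.3.1" and "8.4.3.1" match.
        return tuple(int(part) for part in v.split("."))
    return parse(version) == parse(required)
```

A script could then warn with `if not matches_required_trt(tensorrt.__version__): ...` before spending minutes on a build that is likely to fail.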

> Hello,
>
> I have got a error when I transport ONNX to TRT model by using sh samples/bevformer/tiny/onnx2trt.sh -d 0. [...]
> TypeError: deserialize_cuda_engine(): incompatible function arguments. [...]

Hi, I ran into the same problem while running sh samples/bevformer/tiny/onnx2trt.sh -d 0. Could you please tell me how you changed your environment? Thanks a lot.

HerrYu123 avatar Mar 03 '23 02:03 HerrYu123

Same problem here. Did you solve it?

shaoqb avatar Apr 17 '23 12:04 shaoqb