Lanq_Yan

6 comments by Lanq_Yan

@RunningLeon I built the custom ops, but the conversion from ONNX to TensorRT still fails. Here is some information.

ERROR info: **OSError: /home/lanq/pycharm/mmdeploy-master/build/lib/libmmdeploy_tensorrt_ops.so: undefined symbol: getPluginRegistry**

Deploy args: '--deploy_cfg', default='../configs/mmdet/_base_/base_tensorrt-fp16_dynamic-320x320-1344x1344.py', help='deploy config path' '--model_cfg', ...
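For context, `getPluginRegistry` is a symbol exported by TensorRT's `libnvinfer`, so this error usually means the ops library was loaded before that symbol was visible to the dynamic linker. A minimal sketch of a common workaround (using the library path from the error message above) is to preload `libnvinfer` with `RTLD_GLOBAL`:

```python
import ctypes

# getPluginRegistry lives in libnvinfer; load it with RTLD_GLOBAL so
# the symbol is already visible when the ops library is dlopen'ed.
ctypes.CDLL("libnvinfer.so", mode=ctypes.RTLD_GLOBAL)

# Path taken from the error message above.
ctypes.CDLL(
    "/home/lanq/pycharm/mmdeploy-master/build/lib/libmmdeploy_tensorrt_ops.so"
)
```

If this makes the load succeed, the underlying fix is likely a build-time one: the ops library needs to be linked against `nvinfer` so the symbol resolves without preloading.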

I used `target_link_libraries mmdeploy_tensorrt_ops`, but it doesn't work. Is there anything else I should do to register the plugins?
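One way to check whether registration actually happened is to dump TensorRT's global plugin registry after loading the ops library. This is only a sketch, not mmdeploy's own tooling, and the library path is an assumption:

```python
import ctypes
import tensorrt as trt

# Make libnvinfer's symbols globally visible, then load the custom-op
# library, which registers its plugins in its load-time initializers.
ctypes.CDLL("libnvinfer.so", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("./libmmdeploy_tensorrt_ops.so")  # path is an assumption

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")  # also register TensorRT's built-ins

# Every successfully registered creator shows up here.
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)
```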

I can load the model with TensorRT's executable `trtexec` by adding `--plugins=libmmdeploy_tensorrt_ops.so`, so I think something goes wrong with plugin registration when I load the model through the TensorRT API.
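A rough Python equivalent of that working `trtexec --plugins=...` invocation would load the plugin library before deserializing. The engine file name below is an assumption:

```python
import ctypes
import tensorrt as trt

# Load the plugin library first, mirroring trtexec's --plugins flag.
ctypes.CDLL("libnvinfer.so", mode=ctypes.RTLD_GLOBAL)
ctypes.CDLL("./libmmdeploy_tensorrt_ops.so")

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")

# Deserialize the engine; without the plugin load above, this is
# typically where "could not find plugin" errors would surface.
with open("end2end.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
assert engine is not None
```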

When I updated device B's graphics driver to 536.23 (device A's version), serialization is OK. But I can't understand this: my CUDA version is 11.1, and I think both versions of the driver...
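One way to compare the two machines is to query the highest CUDA version each installed driver supports; a mismatch here is what typically breaks engines across devices. This is a sketch using the CUDA driver API directly, not an mmdeploy utility:

```python
import ctypes
import sys

# Ask the driver for the newest CUDA version it supports
# (the returned value 11010 would mean CUDA 11.1).
libname = "nvcuda.dll" if sys.platform == "win32" else "libcuda.so"
cuda = ctypes.CDLL(libname)

version = ctypes.c_int()
cuda.cuDriverGetVersion(ctypes.byref(version))
print(f"driver supports up to CUDA {version.value // 1000}.{(version.value % 1000) // 10}")
```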

> Driver version is not incompatible with your CUDA, but I suggest you build and run on the same device.
>
> REF https://docs.nvidia.com/deeplearning/tensorrt/release-notes/index.html#rel_7-2-3 to see the right compatibility set.

But it...