> > Hi @CarkusL, can you give us the inference time for batch_size = 1 of your TensorRT implementation, including the preprocess and postprocess as well? > > [09/15/2021-10:38:05] [I] PreProcess Time:...
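For anyone who wants to reproduce such a per-stage breakdown: below is a minimal sketch of how pre-process / inference / post-process times can be measured separately. The `preprocess()` / `infer()` / `postprocess()` names are placeholders, not the repo's actual API.

```cpp
#include <chrono>
#include <cstdio>

// Placeholder stage functions; names are illustrative, not the repo's API.
// For real GPU stages, synchronize the CUDA stream before each timestamp.
void preprocess()  {}
void infer()       {}
void postprocess() {}

int main() {
    using clock = std::chrono::steady_clock;
    auto ms = [](clock::time_point a, clock::time_point b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };

    auto t0 = clock::now();
    preprocess();
    auto t1 = clock::now();
    infer();
    auto t2 = clock::now();
    postprocess();
    auto t3 = clock::now();

    std::printf("PreProcess Time: %.3f ms\n", ms(t0, t1));
    std::printf("Inference Time: %.3f ms\n", ms(t1, t2));
    std::printf("PostProcess Time: %.3f ms\n", ms(t2, t3));
    return 0;
}
```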
@CarkusL Thanks for your great work. I wrote a new project based on your code, where the pre-process and post-process computations are done in CUDA, so it runs much faster. Here...
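To give an idea of what moving the pre-process onto the GPU looks like, here is a minimal sketch of one voxelization step (counting points per BEV cell) as a CUDA kernel. The grid parameters, names, and point layout are assumptions, not taken from the linked project.

```cpp
#include <cuda_runtime.h>

// Sketch: count points falling into each BEV voxel. One thread per point;
// a single atomicAdd per point replaces a serial CPU loop over the cloud.
__global__ void countPointsPerVoxel(const float* points,  // [N, 4]: x, y, z, intensity
                                    int numPoints,
                                    int* voxelCount,       // [gridY * gridX]
                                    float xMin, float yMin,
                                    float voxelSizeX, float voxelSizeY,
                                    int gridX, int gridY) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPoints) return;

    float x = points[idx * 4 + 0];
    float y = points[idx * 4 + 1];
    int vx = static_cast<int>((x - xMin) / voxelSizeX);
    int vy = static_cast<int>((y - yMin) / voxelSizeY);
    if (vx < 0 || vx >= gridX || vy < 0 || vy >= gridY) return;  // out of range

    atomicAdd(&voxelCount[vy * gridX + vx], 1);
}
```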
Sorry for my late reply. It may be related to the difference in GPU architectures; all my samples are tested on an RTX 3080. Also, generally, the first sample will take some...
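Since the first sample pays for lazy CUDA context and kernel initialization, it is common to discard a few warm-up runs before timing. A minimal sketch using CUDA events, where `runOnce()` is a placeholder for the full pipeline:

```cpp
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder for one full inference pass (enqueue + sync).
void runOnce() { /* run the pipeline here */ }

int main() {
    for (int i = 0; i < 5; ++i) runOnce();    // warm-up runs, not timed
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    for (int i = 0; i < 100; ++i) runOnce();  // timed runs
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("avg latency: %.3f ms\n", ms / 100.f);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
```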
I haven't received a response for a long time.
Here is a [repo](https://github.com/Abraham423/CenterFusion) to run CenterFusion; we provide RViz visualization code.
TensorRT version: 8.0.1.6, Ubuntu version: 18.04. Check it out ~
I skipped the preprocess part; it is implemented in preprocess.cpp/.cu.
I can only guess at the following reasons: 1) the PFE computation graph is simple, only two groups of Linear-BatchNorm1d-ReLU, so float mode can already do a good job. 2)...
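For context, switching between float mode and reduced precision is a one-flag change when building the engine with the TensorRT 8.x C++ API, which makes the comparison easy to reproduce. A minimal sketch (the workspace size is illustrative):

```cpp
#include <NvInfer.h>

// Sketch: enable FP16 at build time if the platform supports it.
// For a tiny graph like two Linear-BN1d-ReLU groups, the FP32 ("float
// mode") build may already leave little room for FP16 to improve on.
void configurePrecision(nvinfer1::IBuilder* builder,
                        nvinfer1::IBuilderConfig* config,
                        bool useFp16) {
    if (useFp16 && builder->platformHasFastFp16()) {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
    config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB, illustrative value
}
```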
Yes, it only supports torch checkpoints if `--run_infer` is enabled. If you want to compute evaluation metrics for the TRT engine results, you should first run the C++ code, and...
1. TensorRT is developed to optimize NN inference. The connection part between PFE & RPN (voxel assigning) involves no NN computation, so I don't think TRT would optimize that part of...
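A rough sketch of what that PFE-to-RPN connection looks like as a hand-written CUDA kernel: learned pillar features are scattered back onto the dense BEV canvas that the RPN consumes. TensorRT only sees the two NN engines; a kernel like this bridges them. The names and the channel-major canvas layout are assumptions.

```cpp
#include <cuda_runtime.h>

// Sketch of the "voxel assigning" (pillar scatter) step between PFE and RPN.
// Launch as <<<numPillars, numChannels>>>: one block per pillar, one thread
// per feature channel.
__global__ void scatterPillarsToBEV(const float* pillarFeatures, // [P, C]
                                    const int* pillarCoords,     // [P, 2]: (y, x)
                                    int numPillars, int numChannels,
                                    float* canvas,               // [C, H, W], zero-initialized
                                    int H, int W) {
    int p = blockIdx.x;
    int c = threadIdx.x;
    if (p >= numPillars || c >= numChannels) return;

    int y = pillarCoords[p * 2 + 0];
    int x = pillarCoords[p * 2 + 1];
    canvas[(c * H + y) * W + x] = pillarFeatures[p * numChannels + c];
}
```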