AnyNet
Inference time evaluation input size and TX2 backend
Hi @mileyan, I want to know whether the input image size is 1242x375 or 1232x368, since a 1242x375 input cannot run forward through the model. My other question is the same as #18: could you please tell me whether you evaluated your model's inference time with a TensorRT engine or not? Thank you for your reply and for the excellent code!
The input size is 1232x368. The image is cropped before being passed through the model. You can see this in Dataloader/KITTILoader.py.
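For context, a minimal sketch of the kind of crop involved is below. The exact offsets live in Dataloader/KITTILoader.py; the bottom-right crop shown here follows the common PSMNet-style KITTI convention and is an assumption, not a quote of the repo code. The motivation is presumably that both dimensions must be divisible by the network's downsampling factor (1232 and 368 are both divisible by 16, while 1242 and 375 are not).

```python
from PIL import Image

def crop_kitti_pair(left_path, right_path, crop_w=1232, crop_h=368):
    """Crop a raw KITTI stereo pair (typically 1242x375) to 1232x368.

    Hypothetical helper: the bottom-right crop mirrors PSMNet-style loaders;
    AnyNet's actual Dataloader/KITTILoader.py may use different offsets.
    """
    left_img = Image.open(left_path).convert('RGB')
    right_img = Image.open(right_path).convert('RGB')
    w, h = left_img.size
    # Keep the bottom-right region of the image; both output dimensions
    # are multiples of 16, so all downsampled feature maps align cleanly.
    box = (w - crop_w, h - crop_h, w, h)
    return left_img.crop(box), right_img.crop(box)
```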