RunningLeon
@FDInSky Please post your full script and your environment info from `python tools/check_env.py`.
@ymw123 Hi, 1. Please refer to [here](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/tutorials/how_to_measure_performance_of_models.md) for how we measure the latency of backend models. The latency [benchmark](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/benchmark.md) can be used as a reference. 2. Could you share how you test...
@ymw123 Hi, sorry for the change. 1. Link for how to profile a model: https://mmdeploy.readthedocs.io/zh_CN/latest/02-how-to-run/profile_model.html 2. Link for the benchmark: https://mmdeploy.readthedocs.io/zh_CN/latest/03-benchmark/benchmark.html
> Hi, it depends on the pipeline of model config. Clearly, the input image is not preprocessed to shape 352x352. You may need to set `Resize=(352,352), keep_ratio=False` in the config.
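The `Resize=(352,352), keep_ratio=False` setting mentioned above could look roughly like the following in an mmdetection-style test pipeline. This is a hedged sketch only: the exact keys, transform names, and normalization values depend on your actual model config, so treat everything here as a placeholder.

```python
# Hypothetical fragment of a model config's test pipeline; the transform
# names follow the mmdetection convention — adjust to your actual config.
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(352, 352),
        flip=False,
        transforms=[
            # keep_ratio=False forces an exact 352x352 resize instead of
            # scaling to the shorter side and keeping the aspect ratio
            dict(type='Resize', keep_ratio=False),
            dict(type='Normalize',
                 mean=[123.675, 116.28, 103.53],
                 std=[58.395, 57.12, 57.375],
                 to_rgb=True),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```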
> @RunningLeon > > Is there any problem using the code below to test the speed of a single image? The speed does not include the time to load the...
@ymw123 Hi, you could exclude the `create_input` step. As for the PyTorch model, did you also test it on the Jetson?
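The advice above — time only the forward pass, not input creation or model loading — can be sketched like this. The `create_input` and `run_inference` functions below are stand-ins for the real preprocessing and backend model, and the warm-up loop avoids lazy initialization skewing the first measurement.

```python
# Minimal timing sketch: exclude preprocessing (`create_input`) and
# model loading from the measured latency. Both functions below are
# placeholders for the real pipeline.
import time

def create_input(image_path):
    # placeholder for preprocessing (resize, normalize, to-tensor)
    return [0.0] * 10

def run_inference(inputs):
    # placeholder for the backend model's forward pass
    return sum(inputs)

inputs = create_input("demo.jpg")   # NOT timed

# warm up so one-time initialization does not skew the measurement
for _ in range(10):
    run_inference(inputs)

n_iters = 100
start = time.perf_counter()
for _ in range(n_iters):
    run_inference(inputs)
elapsed_ms = (time.perf_counter() - start) / n_iters * 1000
print(f"mean latency: {elapsed_ms:.3f} ms")
```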
@Luwill6 Hi, please build the custom ops for TensorRT. You can refer to https://mmdeploy.readthedocs.io/en/latest/05-supported-backends/tensorrt.html#build-custom-ops
@Luwill6 Hi, sorry for the late reply. It seems TensorRT is not correctly installed; there might be multiple versions of TensorRT on your machine. Please follow the [instruction](https://mmdeploy.readthedocs.io/en/latest/05-supported-backends/tensorrt.html#install-tensorrt) and retry. > @RunningLeon .I build...
Closing as there has been no activity for a long time.
@habjoel Hi, could you post the full script you are running here? Have you added `--device cuda:0` to `tools/deploy.py`?
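For reference, a typical `tools/deploy.py` invocation with the device flag looks roughly like the following. The deploy config, model config, checkpoint, and image paths below are placeholders; substitute the ones for your model.

```shell
# Run from the mmdeploy repo root; all paths below are placeholders.
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py \
    path/to/model_config.py \
    path/to/checkpoint.pth \
    path/to/test_image.jpg \
    --work-dir work_dir \
    --device cuda:0 \
    --dump-info
```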