Hello, my environment is Win10 + CUDA 11.7 + TensorRT 8.4.3 + OpenCV 4.7.0. I have been trying to compile this project for a week and it keeps failing with errors. Have you tried building it on Windows with VS2019 or another toolchain?
### Checklist
- [ ] I have searched related issues but cannot get the expected help.
- [ ] I have read the [FAQ documentation](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/faq.md) but cannot get the...
YOLOv8 detection model speed
When testing video inference with the YOLOv8 detection model on an RTX 2080 Super, the infer time starts at 2-3 ms, but after a few dozen frames it jumps to 10-20 ms and never comes back down. Could you take another look at this?
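One thing worth ruling out before blaming the engine is the measurement itself: `enqueueV2` launches asynchronously, so timing without a stream synchronize (and without warm-up, while GPU boost clocks are still settling) can make the first frames look artificially fast. Below is a minimal C++ timing sketch, assuming `context`, `bindings`, and `stream` are already created elsewhere; the names are placeholders, not this project's API.

```cpp
// Hedged sketch: per-frame latency measurement for a TensorRT execution context.
#include <chrono>
#include <cstdio>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

double timed_infer(nvinfer1::IExecutionContext* context,
                   void** bindings, cudaStream_t stream) {
    auto t0 = std::chrono::high_resolution_clock::now();
    context->enqueueV2(bindings, stream, nullptr);  // asynchronous launch
    cudaStreamSynchronize(stream);                  // wait for the GPU to actually finish
    auto t1 = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

void benchmark(nvinfer1::IExecutionContext* context,
               void** bindings, cudaStream_t stream) {
    // Warm-up: let kernels, allocators, and clocks settle before measuring.
    for (int i = 0; i < 20; ++i) timed_infer(context, bindings, stream);
    const int iters = 200;
    double total = 0.0;
    for (int i = 0; i < iters; ++i) total += timed_infer(context, bindings, stream);
    std::printf("mean latency: %.2f ms over %d frames\n", total / iters, iters);
}
```

If the mean measured this way is stable, the 2-3 ms readings at the start were most likely un-synchronized timings rather than real end-to-end latency.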
Hello, I ran into the following error while compiling with VS2019. Could you help take a look?
Speed comparison
Hi, is this really 2.5× faster than LoFTR? With the opt fp16 setting I measure about 70 ms, while LoFTR takes about 180 ms, on a GTX 2080 Super with CUDA 11.7. Is there anything else I can tune to optimize it further?
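For reference, these are the TensorRT 8.x builder knobs that an "opt fp16" option typically maps to. This is a hedged sketch with placeholder names, not this project's actual build code, and whether they help further depends on the model and GPU.

```cpp
// Sketch: typical TensorRT 8.x builder settings behind an FP16 build option.
#include <NvInfer.h>

void configure_fp16(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config) {
    if (builder->platformHasFastFp16()) {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);  // allow FP16 tactics
    }
    // A larger workspace pool lets the tactic chooser consider faster kernels.
    config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30);
}
```

Beyond these, the remaining time is often spent outside the engine (preprocessing, host-device copies), so it is worth timing those stages separately before tuning further.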
### What is the feature? Can RTMPose be deployed with OpenVINO? I tried loading rtmpose.onnx (and the .xml IR) and running inference, but it always errors out. Can anyone help...
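Not an official answer, but a plain OpenVINO 2.x C++ sketch that loads an exported model and runs one inference can help isolate whether the error is in the export or in the calling code. The file name, single-input assumption, and NCHW float32 layout are assumptions, not taken from this repository.

```cpp
// Hedged sketch: minimal OpenVINO 2.x inference on an exported RTMPose model.
#include <openvino/openvino.hpp>
#include <algorithm>
#include <iostream>

int main() {
    ov::Core core;
    // read_model accepts the IR (.xml with the .bin next to it) or the .onnx directly.
    std::shared_ptr<ov::Model> model = core.read_model("rtmpose.xml");
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Fill the (assumed single) input with a preprocessed image, NCHW float32.
    ov::Tensor input = request.get_input_tensor();
    float* data = input.data<float>();
    std::fill(data, data + input.get_size(), 0.0f);  // placeholder instead of real pixels

    request.infer();

    // The SimCC head usually exports more than one output, so index explicitly.
    ov::Tensor out0 = request.get_output_tensor(0);
    std::cout << "output 0 elements: " << out0.get_size() << std::endl;
    return 0;
}
```

If this runs but your own code fails, the problem is likely in preprocessing or tensor shapes rather than in the export itself.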
### What is the feature? Hello. Could you give an example of setting the batch size for inference when deploying rtmpose with TensorRT? ### Any other context? _No response_
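Not mmdeploy's own code, but a minimal TensorRT 8.x sketch of how a dynamic batch dimension is usually enabled with an optimization profile. The tensor name `input`, the 256x192 RTMPose resolution, and the profile ranges are assumptions that would need to match your actual ONNX export.

```cpp
// Hedged sketch: dynamic batch via an optimization profile in TensorRT 8.x.
#include <NvInfer.h>
#include <cuda_runtime_api.h>

void add_dynamic_batch(nvinfer1::IBuilder* builder, nvinfer1::IBuilderConfig* config) {
    nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();
    // Min / optimal / max shapes for the (assumed) "input" tensor, N x 3 x 256 x 192.
    profile->setDimensions("input", nvinfer1::OptProfileSelector::kMIN,
                           nvinfer1::Dims4{1, 3, 256, 192});
    profile->setDimensions("input", nvinfer1::OptProfileSelector::kOPT,
                           nvinfer1::Dims4{8, 3, 256, 192});
    profile->setDimensions("input", nvinfer1::OptProfileSelector::kMAX,
                           nvinfer1::Dims4{16, 3, 256, 192});
    config->addOptimizationProfile(profile);
}

// At inference time, set the actual batch size before enqueueing.
void run_batch(nvinfer1::IExecutionContext* context, void** bindings,
               cudaStream_t stream, int batch) {
    context->setBindingDimensions(0, nvinfer1::Dims4{batch, 3, 256, 192});
    context->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);
}
```

The ONNX model must be exported with a dynamic batch axis for this to work; otherwise the profile dimensions will be rejected at build time.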