向峻宏
### Checklist
- [x] I have searched related issues but cannot get the expected help.
- [x] I have read the [FAQ documentation](https://github.com/open-mmlab/mmdeploy/tree/main/docs/en/faq.md) but cannot get the expected help....
Could you explain why yolov8 end2end inference differs so much with and without warmup? Without warmup a run takes about 700 ms; with warmup it takes only 6 ms. Doesn't the Infer function always copy the image to the GPU, run inference, and then copy the result from the GPU back to host memory? Does the image no longer need to be copied from host memory to the GPU after warmup? Also, if I am processing video and have to read frames onto the GPU in real time, there seems to be no way to use warmup, so how should I handle that?
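A note on what warmup actually changes, sketched below under assumptions: the slow first call mostly pays one-time costs (CUDA context creation, engine/kernel initialization, memory-pool allocation), not the per-frame host-to-GPU copy. Every call, warmed up or not, still transfers the image to the GPU and the result back. For video, you can warm up once at startup with a dummy frame of the right shape and then stream real frames. The sketch uses a hypothetical `infer(image)` stand-in, not the actual mmdeploy SDK API.

```python
import time
import numpy as np

def infer(image):
    # Stand-in for the real end2end inference call (hypothetical; not the
    # actual mmdeploy SDK API): copy to GPU, run the model, copy back.
    return image.mean()

frame = np.zeros((640, 640, 3), dtype=np.uint8)

# Warmup: the first few calls pay one-time costs (CUDA context creation,
# engine/kernel initialization, memory-pool allocation) on top of the
# per-frame host<->GPU copies.
for _ in range(10):
    infer(frame)

# Timed runs: each call still copies the frame host->GPU and the result
# GPU->host; only the one-time initialization cost is gone.
start = time.perf_counter()
for _ in range(100):
    infer(frame)
avg_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"average latency: {avg_ms:.2f} ms")
```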
The code is incorrect.
There is a problem with the code: it does not handle the case where a cropped image contains no annotated points, and multi-GPU training is not implemented.
I have a question. Is the training data used by the author the same as the one released? I trained my own model, but it has overexposure issues, for example:...
Hello, author. Thanks for your work! I found that there is a problem with your code. The length of self.feature_loss_module should be 1, which is not equal to the length...
Can I use the sample data provided with the benchmark to evaluate my own pipeline? However, I found that the PDF annotation in dataset.jsonl is too simple. What is the...
I encountered the following problem when using llama-cpp-python on Mac. The model's answer is completely unreasonable. The red box contains the question and answer. The configuration is...
I deployed the InternVL3_5-241B-A28B model and tested it on the mmstar benchmark, but the accuracy I got was only 73.76%. I'd like to know how the paper came up with...