TensorRT_yolo3_module

You can import this module directly

9 TensorRT_yolo3_module issues

I tried onnx 1.2.0 and could convert the ONNX model successfully, but the conversion fails with TensorRT 7.0, so I suspect it is a TensorRT version problem. Also, I would like to save the converted TRT model as an engine file. Do you have any suggestions, or do I just need to change the suffix of the saved file?
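On the second question: the suffix does not matter; a serialized TensorRT engine contains the same bytes whether the file is named `.trt` or `.engine`. Below is a minimal sketch using the standard TensorRT Python API (the file paths are placeholders, not names from this repository):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def save_engine(engine, path="yolov3.engine"):
    # engine.serialize() returns the plan; the file extension is arbitrary.
    with open(path, "wb") as f:
        f.write(engine.serialize())

def load_engine(path="yolov3.engine"):
    # Deserialize the plan back into an ICudaEngine for inference.
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())
```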

Did you test the project with yolo-v3 (416*416)?
$ python3 trt_yolo3_module_1batch.py
Reading engine from file yolov3-608.trt
Traceback (most recent call last):
  File "trt_yolo3_module_1batch.py", line 214, in
    output_dic_list = alpha_yolo3_unit.process_frame_batch(input_dic_list)
  File "trt_yolo3_module_1batch.py", ...

In file trt_yolo3_module_1batch.py, line 49: `a = torch.cuda.FloatTensor()  # PyTorch must occupy part of the CUDA device first`. Why does PyTorch need to go first? Thank you.
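A sketch of what that line appears to be doing: forcing PyTorch to create its CUDA context before pycuda/TensorRT touch the GPU, so that a later lazy initialization by PyTorch does not collide with the context TensorRT is already using. The ordering below is an assumption about the intent, not the repository's exact code:

```python
import torch

# Trigger PyTorch's CUDA initialization up front. If PyTorch only initializes
# CUDA lazily later (e.g. when post-processing boxes on the GPU), that context
# creation can conflict with the one pycuda/TensorRT already hold.
_ = torch.cuda.FloatTensor()   # allocates nothing useful, just initializes CUDA
# torch.cuda.init() is an equivalent, more explicit way to do the same thing.

import pycuda.autoinit         # pycuda attaches its context after PyTorch's exists
import tensorrt as trt
```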

Hi @Cw-zero, in trt_yolo3_module_multibatch.py (line 150) you hard-code the class name 'Person'. It should be assigned dynamically per detection, the way YOLO does, right? If so, how can we do that? ` for b in boxes_k: x1=int(b[0]) x2=int(b[2])...
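A minimal sketch of looking the label up from a names file instead of hard-coding it, assuming each box carries its class index in the last position (the actual box layout in this repository may differ):

```python
def load_class_names(names_path="coco.names"):
    # coco.names: one class name per line, in the order the model was trained with.
    with open(names_path, "r") as f:
        return [line.strip() for line in f if line.strip()]

def label_boxes(boxes_k, class_names):
    """Return (x1, y1, x2, y2, label) tuples instead of a fixed 'Person' label."""
    labeled = []
    for b in boxes_k:
        x1, y1, x2, y2 = int(b[0]), int(b[1]), int(b[2]), int(b[3])
        cls_id = int(b[-1])            # assumption: last element is the class index
        labeled.append((x1, y1, x2, y2, class_names[cls_id]))
    return labeled
```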

Who can explain the "TensorRT inference time" and "After process time" for me? Why do they cost so much time? Thanks, and looking forward to your reply. (Screenshot: /home/broliao/图片/2019-12-10 11-57-57 的屏幕截图.png)
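For what it is worth, a sketch of how the two numbers are typically measured; `do_inference` and `postprocess` below are hypothetical stand-ins for the repository's actual functions, not its API:

```python
import time

def timed_frame(context, do_inference, postprocess, inputs):
    # "TensorRT inference time": only the forward pass executed by the engine.
    t0 = time.time()
    raw_outputs = do_inference(context, inputs)    # hypothetical helper
    trt_time = time.time() - t0

    # "After process time": the CPU work that follows, typically decoding the
    # three YOLO feature maps and running NMS; written in pure Python this is
    # often slower than the forward pass itself.
    t1 = time.time()
    detections = postprocess(raw_outputs)          # hypothetical helper
    post_time = time.time() - t1
    return detections, trt_time, post_time
```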

Thanks for sharing. Could you tell me how to run it on a camera? Thank you.
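A minimal sketch of a webcam loop with OpenCV; the call into this repository's `process_frame_batch` is left commented out because the exact input-dict format is an assumption:

```python
import cv2

cap = cv2.VideoCapture(0)   # 0 = default webcam; a file path or RTSP URL also works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Hypothetical call: wrap the frame the same way the image demo builds
    # input_dic_list before handing it to process_frame_batch(), then draw the
    # returned boxes on `frame` before displaying it.
    # output_dic_list = alpha_yolo3_unit.process_frame_batch([{"img": frame}])
    cv2.imshow("yolov3-trt", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```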

Pruned-model project: https://github.com/Lam1360/YOLOv3-model-pruning. I have recently been trying to accelerate a pruned YOLO with TensorRT but keep failing, so I want to ask whether this code supports a pruned YOLOv3.

**Can you share your TensorRT, onnx, and pytorch versions?** When I run the code I encounter the following errors; I suspect a version mismatch is the cause. onnx 1.5, TensorRT-5.1.5.0. Traceback (most recent call last): File "weight_to_onnx.py",...

I used weight_to_onnx.py from the NVIDIA TensorRT SDK to convert yolov3.weights on an Ubuntu system, and then used the converted yolov3.onnx file on a Windows system, but the Windows TRT SDK module reports a parse...
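Before suspecting the Windows parser, a quick sanity check of the transferred file with the official `onnx` package can rule out a corrupted copy or an opset the parser does not support (the file name `yolov3.onnx` is taken from the report above):

```python
import onnx

# Verify the file is a structurally valid ONNX graph and print its versions;
# a TensorRT parser built against an older opset will reject a newer model.
model = onnx.load("yolov3.onnx")
onnx.checker.check_model(model)
print("IR version:", model.ir_version)
print("Opsets:", [(imp.domain, imp.version) for imp in model.opset_import])
```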