David
@zhreshold Hi, I have not found any NaNs so far when training on my custom datasets. BTW, I have been training like this for a long time. I think many people are...
@chinakook Yep. The data transformers may change the gt bbox [-1, -1, -1, -1, -1] to [xx, xx, xx, xx, -1]; however, the class is still -1. So, when doing label assignment, we should remove this...
@chinakook But [200, 200, 200, 200] is a special bbox, i.e. a point with zero width and zero height, so no anchor can match it. An invalid box should have an area of 0. https://github.com/dmlc/gluon-cv/blob/master/gluoncv/model_zoo/rcnn/faster_rcnn/rcnn_target.py#L50
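The two checks discussed above (padded class id of -1, and degenerate boxes with zero area) can be sketched like this. This is a hypothetical helper for illustration, not the GluonCV code at the link:

```python
# Hypothetical sketch: filtering padded/invalid ground-truth boxes
# before anchor-target assignment. Padded entries may come through as
# class -1 and/or a degenerate box like [200, 200, 200, 200]
# (zero width, zero height, area 0).
import numpy as np

def valid_gt_mask(boxes, classes):
    """boxes: (N, 4) as [x1, y1, x2, y2]; classes: (N,) with -1 for padding."""
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    area = w * h
    # a gt box is valid only if its class id is >= 0 AND it has positive area
    return (classes >= 0) & (area > 0)

boxes = np.array([[10, 10, 50, 60],      # real box
                  [200, 200, 200, 200],  # degenerate point, area 0
                  [-1, -1, -1, -1]],     # padding entry
                 dtype=np.float32)
classes = np.array([3, 0, -1])

mask = valid_gt_mask(boxes, classes)
print(mask)  # [ True False False]
```

Checking the area as well as the class id covers the case where a transform has rewritten the coordinates but left the class untouched.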
@chinakook Yes, you are right. Some changes should be made in the target generator of Faster R-CNN. But for now, training YOLOv3 is OK.
> > @chinakook Yes, you are right. Some changes should be made in the target generator of Faster R-CNN. But for now, training YOLOv3 is OK.
>
> Did it solve the...
Update: 53.8% mIoU... still far away...
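For reference, mean IoU as reported above is usually computed per class from a confusion matrix and then averaged. A minimal sketch (a hypothetical helper, not tied to any particular repo):

```python
# Mean IoU from a confusion matrix: per-class IoU = TP / (TP + FP + FN),
# then average over classes.
import numpy as np

def mean_iou(conf):
    """conf: (C, C) confusion matrix, rows = ground truth, cols = prediction."""
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)  # guard against empty classes
    return iou.mean()

# tiny 2-class example
conf = np.array([[8, 2],
                 [1, 9]])
print(round(mean_iou(conf), 3))  # 0.739
```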
The first question is why it is so fast in the paper: I built the model and got 40 ms inference time, compared to the 10 ms mentioned in the original paper.
> of depthwise conv in PyTorch leads to much slower inference

MXNet is the same, actually~~~
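A back-of-the-envelope sketch of why depthwise conv can still be slow in practice: it cuts multiply-accumulates dramatically, but arithmetic intensity (work per byte moved) drops too, so the kernel tends to be memory-bound and framework implementations (PyTorch, MXNet, ...) often do not reach peak throughput. The shapes below are illustrative assumptions, not measurements from either framework:

```python
# FLOP counts (MACs) for one conv layer on an H x W feature map.
def conv_flops(h, w, c_in, c_out, k):
    # standard convolution: every output channel sees every input channel
    return h * w * c_in * c_out * k * k

def depthwise_flops(h, w, c, k):
    # each input channel is convolved with its own single k x k filter
    return h * w * c * k * k

h = w = 56   # assumed feature-map size
c = 128      # assumed channel count
k = 3        # 3x3 kernel

std = conv_flops(h, w, c, c, k)
dw = depthwise_flops(h, w, c, k)
print(std // dw)  # depthwise has c_out (= 128) times fewer MACs here
```

So the FLOP savings are real; the gap to wall-clock speed comes from the implementation, not the arithmetic.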
> > (and if it's using quantization under the hood, especially for the implicit quantization mode)
>
> TRT's MHA fusion does not support implicit quantization yet. Please use explicit...
> @Aktcob Could you share your trtexec command and the ONNX? Also, could you try the TRT 10.0.1.6 GA release and make sure you have enabled FP16?

@nvpohanh Thanks for the reply!...
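For anyone else following along, the kind of command being asked for above looks roughly like this. The model path is a placeholder; only the flags shown (`--onnx`, `--fp16`, `--verbose`) are real trtexec options:

```shell
# Hypothetical trtexec invocation: build an engine from the ONNX with
# FP16 enabled and print verbose build/timing logs.
trtexec --onnx=model.onnx \
        --fp16 \
        --verbose
```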