QuarTerll
> @QuarTerll sorry for the confusion. The confusion matrix uses the default inference thresholds of 0.25 conf and 0.45 IoU, while the console results are output at the max-F1 confidence, which varies per class,...
@glenn-jocher BTW, why use 0.001 as the confidence threshold for validation during training? If so, what thresholds should I use in detect.py? Should I also use 0.001? Normally, I thought...
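For context, here is a minimal sketch of the standard inference-time thresholds, assuming the torch.hub AutoShape interface (where `conf` and `iou` are attributes on the loaded model); this is just an illustration, not the detect.py internals:

```python
import torch

# Load a pretrained YOLOv5s model through torch.hub (AutoShape wrapper)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

model.conf = 0.25  # confidence threshold for deployment-style inference
model.iou = 0.45   # NMS IoU threshold for deployment-style inference

# Run inference on a sample image and print the kept detections
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()
```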
@glenn-jocher Thanks for your detailed reply, which cleared up a lot of my confusion. So we keep almost all detections when validating during training for the...
BTW, before today I thought the two thresholds in val.py were the same as the ones in detect.py, LOL. But in fact there are some small differences. For other beginners...
@glenn-jocher Gotcha. Thanks for your patience. Now I have a better understanding of YOLOv5. :)
> @QuarTerll 0.25 conf and 0.45 IoU are the standard detection thresholds across tools/platforms. > > val.py goes a bit further and examines the best F1 for each class. You can see...
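For anyone skimming this later, a rough sketch of the best-F1 idea (illustration only with placeholder curves, not the actual val.py code): F1 is evaluated per class across a grid of confidence thresholds, and the confidence that maximizes it is the one reported.

```python
import numpy as np

conf_grid = np.linspace(0.0, 1.0, 101)   # candidate confidence thresholds
precision = np.random.rand(3, 101)       # placeholder precision curves, 3 classes
recall = np.random.rand(3, 101)          # placeholder recall curves, 3 classes

# Per-class F1 at each candidate confidence (epsilon avoids division by zero)
f1 = 2 * precision * recall / (precision + recall + 1e-16)

best_idx = f1.argmax(axis=1)             # index of the max-F1 confidence per class
best_conf = conf_grid[best_idx]          # per-class confidence used for reporting
print(best_conf)
```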
And another question: if so, why don't you guys use normalization? The reason I ask is that my F1 score differs by up to 10% when I use normalization with your...
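To illustrate what I mean by normalization, here is a toy example with made-up counts: column-normalizing the confusion matrix rescales every cell, so numbers read off the normalized matrix can differ from those computed on the raw counts.

```python
import numpy as np

# Raw confusion-matrix counts for a 2-class toy example (made-up numbers)
cm = np.array([[50.,  5.],
               [10., 35.]])

# Normalize each column so it sums to 1 (per predicted class)
col_norm = cm / (cm.sum(axis=0, keepdims=True) + 1e-16)
print(col_norm)
```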

@junkmd Can you help me?
> I do not have Windows 11. > > However, in the case of an application I use, I have noticed that the process ID of the `.exe` file that...