Zhang Yingjie

10 comments by Zhang Yingjie

> Have you checked if the parameters for inference are the same as for val? E.g. conf, imgsz, IoU, etc. I _think_ val uses conf=0.25 and IoU=0.45 for the plots....

> You can pass `rect=False` to `model.val()` to make it match predict.
>
> [#2783 (comment)](https://github.com/ultralytics/ultralytics/issues/2783#issuecomment-2413530537)

Thanks for your advice, I'll try it.
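For context, a minimal sketch of what passing `rect=False` to `model.val()` could look like; the weights file and data YAML below are placeholders, not the ones from this issue:

```python
from ultralytics import YOLO

# Placeholder weights and dataset; substitute your own trained model and data YAML.
model = YOLO("best.pt")

# Disable rectangular batching during validation so preprocessing
# more closely matches what predict() does on single images.
metrics = model.val(data="data.yaml", rect=False)
print(metrics.box.map50)  # mAP@0.5 as a quick sanity check
```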

@liyan1997 Thanks! That was a lifesaver!

> You can find the arguments used by `model.val()` in the docs: https://docs.ultralytics.com/modes/val/#arguments-for-yolo-model-validation
>
> and also for `predict()`: https://docs.ultralytics.com/modes/predict/#inference-arguments
>
> `val()` uses `conf=0.001` and `iou=0.6` while `predict()` uses...
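A hedged sketch of aligning the two: the weights and source folder are placeholders, and the idea is simply to pass `predict()` the same `conf` and `iou` values that `val()` uses by default:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights

# Use the same thresholds that val() applies by default so the
# per-image predictions are comparable to the validation metrics.
results = model.predict(
    source="datasets/val/images",  # placeholder image folder
    conf=0.001,                    # val() default confidence threshold
    iou=0.6,                       # val() default NMS IoU threshold
)
```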

> And use rect=False with model.val

Thanks, but the problem is still there.

![screenshot](https://github.com/user-attachments/assets/1e56137a-ffaa-47c0-ad48-7bc4c2194f02)

> I've noticed a problem with the data loading part of your code. Specifically, the lines:
>
> `train_file_path = os.path.join(dataroot, 'ImageNet1K', 'protocols', 'ood_train.csv')`
> `test_file_path = os.path.join(dataroot, 'ImageNet1K', 'protocols', 'ood_test.csv')`
> ...
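For reference, a minimal sketch of how those protocol paths might be built and loaded; only the file names come from the quote, while `dataroot` and the pandas-based loading are assumptions:

```python
import os
import pandas as pd

dataroot = "/path/to/data"  # assumed dataset root; adjust to your setup

# Build the protocol CSV paths exactly as quoted above.
train_file_path = os.path.join(dataroot, "ImageNet1K", "protocols", "ood_train.csv")
test_file_path = os.path.join(dataroot, "ImageNet1K", "protocols", "ood_test.csv")

# Loading with pandas is an assumption; the column layout depends on the repo's protocol files.
train_df = pd.read_csv(train_file_path)
test_df = pd.read_csv(test_file_path)
print(len(train_df), len(test_df))
```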

> Is the confidence of the predictions also saved?

@Y-T-G Yes, the confidence of each prediction is saved, appended after the YOLO annotation fields [cls, x, y, w, h].
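A small sketch of how such a saved label line could be parsed; the file path is a placeholder, and the `cls x y w h conf` layout assumes the predictions were written with `save_txt=True` and `save_conf=True`:

```python
from pathlib import Path

label_file = Path("runs/detect/predict/labels/image_0001.txt")  # placeholder path

for line in label_file.read_text().splitlines():
    # With save_conf=True each line is: cls x_center y_center width height confidence
    cls_id, x, y, w, h, conf = line.split()
    print(int(cls_id), float(x), float(y), float(w), float(h), float(conf))
```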

> For the metrics calculation, you would need confidence values of the output. And they have to be saved with a confidence threshold of 0.001. Otherwise, mAP can't be calculated...
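Generating predictions that satisfy that requirement might look like the following sketch; the weights and source folder are placeholders:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights

# Keep nearly all detections (conf=0.001) and write one label file per image,
# including the confidence value needed later for the mAP calculation.
model.predict(
    source="datasets/val/images",  # placeholder image folder
    conf=0.001,
    save_txt=True,
    save_conf=True,
)
```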

Batch size seems to have a significant impact on the results. When I set the batch size to 1, the results are reproduced perfectly, but it is unacceptably slow.
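A minimal sketch of that reproduction setup, assuming Ultralytics `model.val()`; the weights and data YAML are placeholders:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights

# batch=1 reproduces the reported results exactly, at the cost of much slower validation.
metrics = model.val(data="data.yaml", batch=1, rect=False)
print(metrics.box.map50, metrics.box.map)
```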

> > Batch size seems to have a significant impact on the results. When I set the batch size to 1, the results are reproduced perfectly, but it is unacceptably slow.
>
> Hello, could you provide the detailed config you used for the perfect reproduction? That is very important to me, and I'm looking forward to your reply...