Evaluation isn't working correctly
Hi, thank you for your code. It helped me a lot. I used part of your notebook file mask_rcnn.ipynb, but converted it to a .py file and split it into two files: train and load+evaluation. Everything works so far, except that the evaluation isn't working at all.
This is the part of the code that doesn't work:
```python
import numpy as np

predictions = extra_utils.compute_multiple_per_class_precision(
    model, inference_config, dataset_test,
    number_of_images=60, iou_threshold=0.5)

complete_predictions = []
for shape in predictions:
    complete_predictions += predictions[shape]
    print("Test", type(shape))
    print("{} ({}): {}".format(shape, len(predictions[shape]), np.mean(predictions[shape])))

print("--------")
print("average: {}".format(np.mean(complete_predictions)))
```
When I run that part of the code, this is the entire output:
```
Test <class 'str'>
knot (60): 0.0
--------
average: 0.0
```
My test set contains 60 images, and the loop takes over 5 minutes to finish, but this is the only output, and I get an average of 0. Why is that?
Also, in your code you sometimes use model.find_last()[1], but the [1] seems wrong. When I load my model with it, I get errors; when I remove the [1], everything works fine.
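For reference, this is the workaround I use. It's a sketch based on my assumption that find_last() returned a (log_dir, checkpoint_path) tuple in older versions of the Matterport Mask R-CNN library, while newer versions return just the path string, so I accept either form:

```python
def checkpoint_path_from(result):
    """Accept either a (log_dir, checkpoint_path) tuple (older library
    versions, where [1] was needed) or a plain path string (newer versions)."""
    return result[1] if isinstance(result, tuple) else result

# Usage with a Mask R-CNN model object:
# weights_path = checkpoint_path_from(model.find_last())
# model.load_weights(weights_path, by_name=True)
```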
If you need my whole code, I will post it here.
Can somebody help me? I just need an mAP for my test set. I split my data into a training set, a validation set, and a test set, and trained my network for a long time. Now I want to know how good my network is, so I need a precision value (mAP) for the predictions on my test set.
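To make clear what I'm expecting, here is a minimal sketch of per-class precision at an IoU threshold. This is my own simplified version for plain bounding boxes, not the extra_utils implementation, and the box format (y1, x1, y2, x2) is an assumption:

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (y1, x1, y2, x2)."""
    y1 = max(box_a[0], box_b[0])
    x1 = max(box_a[1], box_b[1])
    y2 = min(box_a[2], box_b[2])
    x2 = min(box_a[3], box_b[3])
    inter = max(0, y2 - y1) * max(0, x2 - x1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_at_iou(gt_boxes, pred_boxes, iou_threshold=0.5):
    """Fraction of predictions that match a (so far unmatched) ground-truth
    box with IoU >= iou_threshold."""
    if not pred_boxes:
        return 0.0
    matched = set()
    true_positives = 0
    for pred in pred_boxes:
        for i, gt in enumerate(gt_boxes):
            if i not in matched and box_iou(pred, gt) >= iou_threshold:
                matched.add(i)
                true_positives += 1
                break
    return true_positives / len(pred_boxes)

# One exact match and one stray prediction -> precision 0.5
gt = [(0, 0, 10, 10)]
preds = [(0, 0, 10, 10), (20, 20, 30, 30)]
print(precision_at_iou(gt, preds))  # 0.5
```

If every image produced a precision of exactly 0.0 like in my output above, I would expect something like a class-id or mask-format mismatch between the ground truth and the predictions, which is why I'm asking.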