Fixed wrong dice computation
The Dice computation was wrong when both the ground truth and the prediction were empty (all-zeros tensors). It returned 2.0, which is not a possible Dice score.
Before:
2 * ((GT * pred).sum() + 0.0001) / (GT.sum() + pred.sum() + 0.0001)
If both are 0:
2 * 0.0001 / 0.0001 = 2.0 => wrong; Dice must lie between 0 and 1
Now:
(2 * (GT * pred).sum() + 0.0001) / (GT.sum() + pred.sum() + 0.0001)
If both are 0:
(2 * 0 + 0.0001) / 0.0001 = 1.0 => Fixed
Also, as originally written, 0.0002 was always added to the numerator while only 0.0001 was added to the denominator, so the Dice score was always slightly higher than it should be.
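The difference between the two formulas can be reproduced with a minimal sketch (the function names `dice_old` and `dice_fixed` are illustrative, not from the actual code):

```python
import numpy as np

def dice_old(gt, pred, eps=1e-4):
    # Old formula: the 2 multiplies the smoothing term as well,
    # so for empty gt and pred this gives 2*eps / eps = 2.0.
    return 2 * ((gt * pred).sum() + eps) / (gt.sum() + pred.sum() + eps)

def dice_fixed(gt, pred, eps=1e-4):
    # Fixed formula: only the intersection is doubled, so empty
    # gt and pred give eps / eps = 1.0.
    return (2 * (gt * pred).sum() + eps) / (gt.sum() + pred.sum() + eps)

empty = np.zeros((4, 4))
print(dice_old(empty, empty))    # 2.0
print(dice_fixed(empty, empty))  # 1.0
```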
why dice =1 when there is no class? Can't we calculate dice without that particular image?
You could skip all images where the class is not present and your model predicts nothing, and then get a Dice score similar to an F1 score. Basically, whenever your model segments something or the ground truth contains something, you compute the Dice, which includes true positives, false positives, and false negatives but excludes true negatives.
In general it is treated the way I described because, if there is nothing in the ground truth and your model predicts nothing, then the model did predict that area correctly (a true negative).
Anyway, in the given code Dice was 2 whenever neither was present, which means true negatives inflated the mean Dice a lot and made the model seem much better than it actually is.
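The skipping variant described above could be sketched like this (the helper name `mean_dice_skip_empty` is hypothetical, not part of the repository):

```python
import numpy as np

def mean_dice_skip_empty(gts, preds, eps=1e-4):
    # Hypothetical helper: average Dice over image pairs, skipping
    # pairs where both ground truth and prediction are empty, so
    # true-negative images do not influence the mean.
    scores = []
    for gt, pred in zip(gts, preds):
        if gt.sum() == 0 and pred.sum() == 0:
            continue  # true negative: exclude from the mean
        scores.append((2 * (gt * pred).sum() + eps)
                      / (gt.sum() + pred.sum() + eps))
    return float(np.mean(scores)) if scores else float("nan")
```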
ok