Strange behavior of precision@k
Hello! When I run the dynAE model on the "SBM" dataset, I get the results below.

| P@2 | P@10 | P@100 | P@200 | P@300 | P@500 | P@1000 | P@EdgeNum |
|---|---|---|---|---|---|---|---|
| 0.000000 | 0.000000 | 0.100000 | 0.650000 | 0.770000 | 0.826667 | 0.876000 | 0.924000 |
| 0.000000 | 0.000000 | 0.100000 | 0.580000 | 0.755000 | 0.810000 | 0.868000 | 0.919000 |
| 0.000000 | 0.000000 | 0.000000 | 0.560000 | 0.730000 | 0.796667 | 0.864000 | 0.917000 |
| 0.000000 | 0.000000 | 0.000000 | 0.830000 | 0.895000 | 0.926667 | 0.946000 | 0.963000 |
| 0.000000 | 0.000000 | 0.100000 | 0.750000 | 0.835000 | 0.883333 | 0.922000 | 0.953000 |
I can't understand two things: 1. Why are P@2 and P@10 always zero? 2. Why would P@k decrease with increasing k?
We know that precision@k is the fraction of correct predictions among the top k predictions, so the result above looks strange!
Hi, if you run the above experiments multiple times and take an average, you will get the results you expect. The discrepancy you are seeing is because of stochasticity. Since P@2 only looks at the top 2 links, it can only take the values 0, 0.5, and 1 in a single run.
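To make this concrete, here is a minimal sketch of precision@k for link prediction (the function name and toy data are my own illustration, not from the dynAE code): with only k = 2 slots, a single run can only score 0, 0.5, or 1, so a 0.0 in one run is not surprising.

```python
import random

def precision_at_k(ranked_predictions, true_edges, k):
    """Fraction of the top-k ranked predicted edges that are true edges."""
    top_k = ranked_predictions[:k]
    hits = sum(1 for edge in top_k if edge in true_edges)
    return hits / k

# Hypothetical example: 10 candidate edges, 6 of which are true.
candidates = [(i, i + 1) for i in range(10)]
true_edges = set(candidates[:6])

# Rank candidates by a random "score" to mimic a noisy model in one run.
random.seed(0)
ranked = sorted(candidates, key=lambda e: random.random())

for k in (2, 5, 10):
    print(f"P@{k} = {precision_at_k(ranked, true_edges, k):.2f}")
# For k = 2, the only possible values are 0.00, 0.50, or 1.00.
```

Averaging `precision_at_k` over many independent runs smooths out this granularity, which is why repeated runs give the expected trend.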
OK, that answers my first question: as you said, the zero values of P@2 are because of stochasticity.
But what about the second question: why would P@k decrease with increasing k?
Are you asking why P@k increases with increasing k? Not the other way round, right?
Oh, yes.