Missing ground-truth annotations for part of the Something-Something-V2 dataset
Thanks for your excellent work.
I have noticed that the total number of annotated instances is 180049, which is less than the original Something-Something-V2 dataset (train + validation = 168913 + 24777 = 193690). I wonder why the annotations for the remaining 13641 instances are missing?
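For anyone who wants to check which videos are affected, here is a minimal sketch of the comparison. The counts come from the numbers above; the file names (`ssv2_train.json`, `ssv2_validation.json`, `annotations.json`) are hypothetical placeholders, not the repo's actual layout.

```python
import json

# Counts from the original SSv2 splits and this repo's annotations
train_count = 168913
val_count = 24777
annotated_count = 180049

total = train_count + val_count
missing = total - annotated_count
print(f"original total: {total}, missing annotations: {missing}")
# original total: 193690, missing annotations: 13641

def load_ids(path):
    """Load video IDs from a hypothetical SSv2-style JSON list of records."""
    with open(path) as f:
        return {item["id"] for item in json.load(f)}

# Identify exactly which video IDs have no annotation (paths are assumptions)
# original_ids = load_ids("ssv2_train.json") | load_ids("ssv2_validation.json")
# annotated_ids = load_ids("annotations.json")
# print(sorted(original_ids - annotated_ids))
```

The set difference at the end would list the excluded video IDs directly, which may help pin down whether the exclusions follow some pattern.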
Also, since Table 2 in your paper compares results on the original Something-Something-V2 dataset against other SOTA methods that use the full ground-truth annotations, I wonder how you obtained those results with the incomplete annotations?
Looking forward to your reply. Thanks.
I am noticing the same, and I'm very perplexed. It would be greatly appreciated if @joaanna could help us understand. It's OK to exclude videos. We just want to know why they were excluded.