Jathushan Rajasegaran
Hi @libo-coder, thank you! I think it is because it needs a metric (`val_acc`) to find the best model. Did you change any metric in the callbacks?
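As a minimal sketch of the point above (plain Python, not the repo's actual code; the class and method names here are hypothetical): a "best model" callback such as Keras' `ModelCheckpoint` can only select a best epoch if the monitored metric (e.g. `val_acc`) actually appears in the per-epoch logs, so renaming or removing that metric in the callbacks silently breaks the selection.

```python
# Hypothetical illustration of why a best-model callback needs the
# monitored metric present in the epoch logs. This mimics, in plain
# Python, the logic Keras' ModelCheckpoint applies with monitor="val_acc".

class BestModelTracker:
    """Keeps the epoch whose logs report the highest monitored metric."""

    def __init__(self, monitor="val_acc"):
        self.monitor = monitor
        self.best_value = float("-inf")
        self.best_epoch = None

    def on_epoch_end(self, epoch, logs):
        value = logs.get(self.monitor)
        if value is None:
            # Failure mode when the metric was renamed or removed in the
            # callbacks: no epoch can ever be selected as "best".
            raise KeyError(
                f"Metric '{self.monitor}' not found in logs: {sorted(logs)}"
            )
        if value > self.best_value:
            self.best_value = value
            self.best_epoch = epoch


tracker = BestModelTracker(monitor="val_acc")
tracker.on_epoch_end(0, {"loss": 1.2, "val_acc": 0.61})
tracker.on_epoch_end(1, {"loss": 0.9, "val_acc": 0.74})
print(tracker.best_epoch)  # epoch 1 had the higher val_acc
```

If the training loop logs the metric under a different key (say `val_accuracy`), the lookup fails immediately, which is the kind of mismatch the comment asks about.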
Thanks, could you send a pull request?
Hi, I'm extremely sorry for the delay in replying to this post, as I had to reproduce the problem on my side. Theoretically, it should work, but I also found that the...
@chengjianhong yes, if you need it I can share the code.
Hi, @biprateep Thank you! I am not sure what could be wrong with the different versions. If you can share a TF2 implementation, I can run it and check for any issues....
Hi, the annotations for AVA will be out in a week. We will try to release the HMAR training code as soon as possible.
@xiaocc612, evaluation on AVA includes ~1.3k examples with shot changes. These sequences come from the validation set of AVA. Shot change detection is done automatically, and the person bounding...
@somyagoel13 yes, demo.py works on any YouTube video. We have released a fast online version in the PHALP repo, which also uses the same 3D representations.
@xiaocc612 yes, our method works on monocular RGB images. We don't use any explicit depth information.
Hi @hacql2004, sorry that I forgot to respond to this issue. Are the results you reported for the last task?