VideoMAE
[NeurIPS'22] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
**Short Description**
By supporting [keras-v3](https://keras.io/keras_3/), the same codebase can run on multiple backends, i.e. TensorFlow, PyTorch, and JAX.

**Other Information**
- [ ] update packages from `tensorflow` to `keras-v3` ...
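As a minimal sketch of what the multi-backend setup would look like (the `KERAS_BACKEND` environment variable is the standard Keras 3 mechanism; the commented `videomae` import path is hypothetical):

```python
import os

# Select the Keras 3 backend before importing keras.
# Supported values are "tensorflow", "torch", and "jax".
os.environ["KERAS_BACKEND"] = "jax"

import keras
print(keras.backend.backend())  # -> "jax"

# Hypothetical: once the package targets keras-v3, the same model code
# would run unchanged on any of the three backends.
# from videomae import VideoMAE
# model = VideoMAE(...)
```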
May I ask whether the final reconstructed video frame rate is the same as the original video frame rate, given the temporal downsampling? If they are the same, what is the...
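For context, an illustrative sketch of the arithmetic involved, assuming the usual VideoMAE setup of sampling a fixed number of frames with a temporal stride (the numbers below are placeholders, not values from this repo):

```python
# Effective frame rate of the sampled (and hence reconstructed) clip
# when frames are taken every `sampling_rate` frames from the source.
original_fps = 30      # frame rate of the source video (assumption)
sampling_rate = 4      # temporal stride used when sampling the clip (assumption)

effective_fps = original_fps / sampling_rate
print(effective_fps)   # 7.5 -- the reconstruction covers only the sampled frames
```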
Can it handle live video input, with live predictions? And also, can I use weights that I trained with VideoMAE in the Hugging Face pipeline? They have this list of...
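For reference, a minimal sketch of the Hugging Face `transformers` video-classification pipeline with a published VideoMAE checkpoint; weights trained with this (Keras) repo would first need to be converted to the `transformers` format, and the model id below is one of the public MCG-NJU releases, not a checkpoint from this repo:

```python
from transformers import pipeline

# Video-classification pipeline backed by a VideoMAE checkpoint.
# Swap in your own converted checkpoint id if applicable.
video_cls = pipeline(
    "video-classification",
    model="MCG-NJU/videomae-base-finetuned-kinetics",
)

# Run on a local video file; returns the top predicted labels with scores.
predictions = video_cls("path/to/video.mp4")
print(predictions)
```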