Guy Tevet
It looks like some of the data was not parsed into the `.pt` files (using `amass_parser.py`); maybe some of the datasets were omitted? Anyway, you can work around this by...
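If it helps, a quick way to check which datasets actually made it into the parsed file is to load it and list its top-level entries. This is just a minimal sketch, assuming the `.pt` produced by `amass_parser.py` is a dictionary keyed by dataset/sequence name (the path below is a placeholder):

```python
import torch

# Placeholder path -- point this to the .pt file produced by amass_parser.py.
db = torch.load("data/amass_db.pt")

# Assuming the parsed database is a dict keyed by dataset/sequence name
# (an assumption about the format, not a confirmed spec), list what is inside.
if isinstance(db, dict):
    for key, value in db.items():
        size = len(value) if hasattr(value, "__len__") else type(value).__name__
        print(key, size)
else:
    print("Unexpected top-level type:", type(db))
```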
Sharing the final `.pt` file would be easiest for me as well, but as far as I understand it would violate the copyright terms of AMASS. If I misinterpret their license,...
If you want, you can contact the data publishers to verify it with them. If they approve, I will upload the `.pt`s.
Hi @Zessay! What dataset did you use? Please share the command you ran to produce this result.
Pointing to the provided `data/vibe_data/vibe_model_w_3dpw.pth.tar` causes the same error.
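As a sanity check, it may be worth confirming that the checkpoint file itself loads cleanly (a minimal sketch; it only prints whatever structure `torch.load` returns, without assuming specific key names):

```python
import torch

# Load the provided VIBE checkpoint on CPU to rule out a corrupted download.
ckpt = torch.load("data/vibe_data/vibe_model_w_3dpw.pth.tar", map_location="cpu")

# For a dict-style checkpoint this prints its top-level keys.
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
else:
    print(type(ckpt))
```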
Thanks @victory-12374! Let me clarify - I have no problem running the demo as specified by the authors (`python demo.py --vid_file sample_video.mp4 --output_folder output/ --display`). I'm now trying to...
Thanks @clementapa! I do agree :) This is planned and expected during November.
(1) Indeed, MotionCLIP was trained on BABEL, which has simpler textual labels. We didn't train with HumanML, and I expect it would yield better results. (2) No. Hope it helps :)
Did you train from scratch or run the pre-trained model?
Interesting. That shouldn't happen. Can you share some results?