cap2vid
Attentive Semantic Video Generation using Captions
I noticed that you manually added captions for the KTH dataset, which must have taken a lot of work. Could you please share the captions? Thank you very much!
Could you please elaborate on how to perform the actual caption-to-video generation? The testing phase in the given code only generates arbitrary videos.
The words are mapped to unique integers when the captions are saved in the h5py file. However, before being fed into the bidirectional RNN, are they represented as an embedding/fixed-size vector, or the...
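For what it's worth, a common pattern matching what this question describes is to tokenize each caption, map every word to a unique integer id (the ids that would be stored in the h5py file), and then look each id up in a learned embedding matrix before the bidirectional RNN. A minimal NumPy sketch of that pipeline, with all names and dimensions hypothetical (the repo's actual vocabulary and embedding setup may differ):

```python
import numpy as np

# Toy captions standing in for the KTH caption set (hypothetical data).
captions = ["person walking left", "person running right"]

# Build a word -> unique-integer vocabulary; 0 is reserved for padding.
vocab = {}
for cap in captions:
    for word in cap.split():
        vocab.setdefault(word, len(vocab) + 1)

# Encode each caption as a fixed-length integer sequence, as it might
# be stored in an h5py dataset.
max_len = 4
encoded = np.zeros((len(captions), max_len), dtype=np.int64)
for i, cap in enumerate(captions):
    ids = [vocab[w] for w in cap.split()]
    encoded[i, : len(ids)] = ids

# Embedding lookup: each integer id indexes a row of an embedding
# matrix (randomly initialized here; learned during training in
# practice), producing the fixed-size vectors the RNN would consume.
embed_dim = 8
rng = np.random.default_rng(0)
embedding = rng.standard_normal((len(vocab) + 1, embed_dim))
embedded = embedding[encoded]  # shape: (num_captions, max_len, embed_dim)

print(embedded.shape)  # (2, 4, 8)
```

So under this sketch the integers themselves are never fed to the RNN directly; they are indices into an embedding table whose rows are the fixed-size vectors.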
Where can I get the caption labels? Thank you.