CoCLR
[NeurIPS'20] Self-supervised Co-Training for Video Representation Learning. Tengda Han, Weidi Xie, Andrew Zisserman.
When I try: `CUDA_VISIBLE_DEVICES=0,1,2 python main_classifier.py --net s3d --dataset ucf101 --seq_len 32 --ds 1 --batch_size 32 --train_what last --epochs 30 --schedule 60 80 --optim sgd --lr 1e-1 --wd 1e-3`...
How can I get the two-stream feature? Can the pretrained RGB model and Flow model be used to extract two-stream features? What command should I use?
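While waiting for the authors' answer, a common way to build a two-stream feature is late fusion: run the pretrained RGB and Flow backbones separately, then concatenate or average their per-clip embeddings. The sketch below only illustrates that fusion step on toy arrays; `fuse_two_stream` is a hypothetical helper, not part of the CoCLR codebase, and the repo may expose extraction differently.

```python
import numpy as np

def fuse_two_stream(rgb_feat: np.ndarray, flow_feat: np.ndarray,
                    mode: str = "concat") -> np.ndarray:
    """Combine RGB and Flow features for one batch of clips.

    rgb_feat, flow_feat: arrays of shape (num_clips, feat_dim),
    e.g. the outputs of the two pretrained S3D backbones.
    mode: "concat" stacks the two embeddings; "avg" averages them.
    """
    if mode == "concat":
        return np.concatenate([rgb_feat, flow_feat], axis=1)
    if mode == "avg":
        return (rgb_feat + flow_feat) / 2.0
    raise ValueError(f"unknown mode: {mode}")

# Toy features standing in for the two backbones' outputs.
rgb = np.random.randn(4, 128)
flow = np.random.randn(4, 128)

fused = fuse_two_stream(rgb, flow, mode="concat")
print(fused.shape)  # (4, 256)
```

Concatenation keeps both representations intact at twice the dimensionality; averaging keeps the original dimension but assumes the two feature spaces are aligned.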
Hi, thanks for your great work, but I am wondering why you use color jitter to augment the inputs at test time (L463 in eval/main_classifier.py)?
Good afternoon, `python main_classifier.py --net s3d --dataset ucf101 --seq_len 32 --ds 1 --batch_size 32 --train_what last --epochs 100 --schedule 60 80 --optim sgd --lr 1e-1 --wd 1e-3 --pretrain ../feature/CoCLR-k400-rgb-128-s3d.pth.tar` After...