Charles Kihn
modelscope 1.7.1

In the official example, the two benchmarks each have their own weights: VideoGPT-plus/MBZUAI/VideoGPT-plus_Phi3-mini-4k/mvbench and VideoGPT-plus/MBZUAI/VideoGPT-plus_Phi3-mini-4k/vcgbench.
The steps I ran were:

step 1: pretrain_projector_image_encoder.sh
step 2: pretrain_projector_video_encoder.sh
step 3: finetune_dual_encoder.sh
step 4: eval/vcgbench/inference/run_ddp_inference.sh
step 5: eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh

Besides the training steps 1-3 and the evaluation steps 4-5 above, is there any other information or step I missed?
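For reference, this is how I chain the five steps together. It is just a convenience driver I wrote, not part of the repo; the DRY_RUN flag is my own addition so I can check the order before launching real jobs:

```shell
#!/usr/bin/env bash
# Dry-run driver for the five steps listed above.
# DRY_RUN is an illustrative flag (not from the repo); set DRY_RUN=0 to execute.
set -euo pipefail
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $1"      # just print the step in dry-run mode
  else
    bash "$1"                 # actually execute the script
  fi
}

run pretrain_projector_image_encoder.sh                    # step 1
run pretrain_projector_video_encoder.sh                    # step 2
run finetune_dual_encoder.sh                               # step 3
run eval/vcgbench/inference/run_ddp_inference.sh           # step 4
run eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh      # step 5
```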
```
from .dataset_config import *

DataConfig = {
    "PRETRAINING": [CC3M_595K, COCO_CAP, COCO_REG, COCO_REC],
    "FINETUNING": [CONV_VideoChatGPT, VCG_HUMAN, VCG_PLUS_112K, CAPTION_VIDEOCHAT,
                   CLASSIFICATION_K710, CLASSIFICATION_SSV2, CONV_VideoChat1,
                   REASONING_NExTQA, REASONING_CLEVRER_QA, REASONING_CLEVRER_MC,
                   VQA_WEBVID_QA],
    "VCGBench_FINETUNING": [CONV_VideoChatGPT, VCG_HUMAN, VCG_PLUS_112K, CAPTION_VIDEOCHAT, ...
```
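My understanding is that DataConfig just maps a mode name to the list of dataset specs used for that run. A minimal sketch of that selection logic (the keys mirror the config above, but the list entries here are illustrative placeholders, not the repo's actual dataset objects):

```python
# Sketch of how a DataConfig mode name selects a dataset mix.
# Keys mirror the repo's dataset_config; values are placeholder strings.
DataConfig = {
    "FINETUNING": ["CONV_VideoChatGPT", "VCG_HUMAN", "VQA_WEBVID_QA"],
    "VCGBench_FINETUNING": ["CONV_VideoChatGPT", "VCG_HUMAN", "VCG_PLUS_112K"],
    "MVBench_FINETUNING": ["CLASSIFICATION_K710", "CLASSIFICATION_SSV2"],
}

def select_datasets(dataset_use: str) -> list:
    """Return the dataset list for the requested mode, failing loudly otherwise."""
    if dataset_use not in DataConfig:
        raise ValueError(
            f"Unknown dataset_use={dataset_use!r}; choices: {sorted(DataConfig)}"
        )
    return DataConfig[dataset_use]

print(select_datasets("VCGBench_FINETUNING"))
```

If fine-tuning is launched with the plain FINETUNING mode, the benchmark-specific mixes are simply never read, which is why I am asking whether skipping them matters.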
I didn't use the VCGBench_FINETUNING or MVBench_FINETUNING configs. Will that cause any problems?
 