Charles Kihn


![img_v3_02bq_8ead65cb-970b-45f7-ad83-40590b86828g](https://github.com/OpenGVLab/VideoMAEv2/assets/142862250/0d8d9a29-5665-4974-8c78-4ca354538bb4)

In the official example, the two benchmarks each have their own weights:

- `VideoGPT-plus/MBZUAI/VideoGPT-plus_Phi3-mini-4k/mvbench`
- `VideoGPT-plus/MBZUAI/VideoGPT-plus_Phi3-mini-4k/vcgbench`
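To keep the benchmark-to-checkpoint mapping explicit, here is a minimal sketch assuming only the two directories above; `pick_weights` is a hypothetical helper, not part of the VideoGPT-plus codebase:

```
# Map each benchmark to its checkpoint directory (paths from the
# official example above). pick_weights is a hypothetical helper,
# not part of the VideoGPT-plus codebase.
CHECKPOINTS = {
    "mvbench": "MBZUAI/VideoGPT-plus_Phi3-mini-4k/mvbench",
    "vcgbench": "MBZUAI/VideoGPT-plus_Phi3-mini-4k/vcgbench",
}

def pick_weights(benchmark: str) -> str:
    """Return the checkpoint directory for the given benchmark."""
    if benchmark not in CHECKPOINTS:
        raise ValueError(f"unknown benchmark: {benchmark!r}")
    return CHECKPOINTS[benchmark]

print(pick_weights("vcgbench"))  # MBZUAI/VideoGPT-plus_Phi3-mini-4k/vcgbench
```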

My pipeline:

1. `pretrain_projector_image_encoder.sh`
2. `pretrain_projector_video_encoder.sh`
3. `finetune_dual_encoder.sh`
4. `eval/vcgbench/inference/run_ddp_inference.sh`
5. `eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh`

So besides the training steps (1-3) and the evaluation steps (4-5) above, is there any other information or step I missed? I run them strictly in order, as in the sketch below.
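A minimal sketch of that ordering, assuming the script paths exactly as listed above; the Python driver itself is only an illustration, not something shipped with the repo:

```
# Run the five stages in order and stop on the first failure.
# Script paths are copied verbatim from the steps above; this
# driver is illustrative, not part of the VideoGPT-plus repo.
import subprocess

STAGES = [
    "pretrain_projector_image_encoder.sh",
    "pretrain_projector_video_encoder.sh",
    "finetune_dual_encoder.sh",
    "eval/vcgbench/inference/run_ddp_inference.sh",
    "eval/vcgbench/gpt_evaluation/vcgbench_evaluate.sh",
]

for i, script in enumerate(STAGES, start=1):
    print(f"== step {i}: {script} ==")
    subprocess.run(["bash", script], check=True)  # raise on non-zero exit
```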

```
from .dataset_config import *

DataConfig = {
    "PRETRAINING": [CC3M_595K, COCO_CAP, COCO_REG, COCO_REC],
    "FINETUNING": [CONV_VideoChatGPT, VCG_HUMAN, VCG_PLUS_112K,
                   CAPTION_VIDEOCHAT, CLASSIFICATION_K710, CLASSIFICATION_SSV2,
                   CONV_VideoChat1, REASONING_NExTQA, REASONING_CLEVRER_QA,
                   REASONING_CLEVRER_MC, VQA_WEBVID_QA],
    "VCGBench_FINETUNING": [CONV_VideoChatGPT, VCG_HUMAN, VCG_PLUS_112K,
                            CAPTION_VIDEOCHAT, ...
```

I didn't use the VCGBench_FINETUNING or MVBench_FINETUNING configs. Will that cause any problems?
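My reading is that the training entry point just indexes `DataConfig` with one key, so skipping the benchmark-specific keys means training on the generic FINETUNING mix. A minimal sketch of that selection, assuming the `DataConfig` dict above; `choose_datasets` is a hypothetical wrapper, not code from the repo:

```
# Sketch of selecting a dataset list from DataConfig. Only the key
# names come from the repo's config shown above; this wrapper is
# hypothetical.
def choose_datasets(data_config: dict, benchmark: str = "") -> list:
    """Return the benchmark-specific dataset list, falling back to
    the generic FINETUNING mix when no benchmark is given."""
    key = f"{benchmark}_FINETUNING" if benchmark else "FINETUNING"
    if key not in data_config:
        raise KeyError(f"{key} is not defined in DataConfig")
    return data_config[key]

# choose_datasets(DataConfig, "VCGBench") -> DataConfig["VCGBench_FINETUNING"]
# choose_datasets(DataConfig)             -> DataConfig["FINETUNING"]
```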

![image](https://github.com/user-attachments/assets/bb712643-5072-4f98-b5b1-86806a8844ae)
![image](https://github.com/user-attachments/assets/b48ab561-564b-4a78-b569-a01965acf388)