CaicaiJason


![image](https://github.com/THUDM/VisualGLM-6B/assets/27617518/8945826d-9710-4c14-a35d-bfc36158406e) I'm running into the same problem. Has it been solved?

> > ![image](https://user-images.githubusercontent.com/27617518/243574954-8945826d-9710-4c14-a35d-bfc36158406e.png) I'm running into the same problem. Has it been solved?
>
> What dataset is this?

Some data I generated myself.

Change the dataset class so that it first loads only the index, then reads the images batch by batch. The method given originally reads the entire dataset into memory in one go and OOMs immediately.

```python
class FewShotDataset(Dataset):
    def __init__(self, path, processor, tokenizer, args):
        self.max_seq_length = args.max_source_length + args.max_target_length
        # Load only the JSON index into memory; images are read lazily later
        with open(path, 'r', encoding='utf-8') as f:
            self.data = json.load(f)
        self.processor = processor
        self.tokenizer = tokenizer
        ...
```
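For illustration, a minimal sketch of the lazy-loading idea described above. The field names (`img`, `prompt`, `label`) and the `processor` call are assumptions for the sake of the example, not the repo's actual code:

```python
import json
from PIL import Image
from torch.utils.data import Dataset

class LazyFewShotDataset(Dataset):
    """Keeps only the JSON index in memory; decodes each image on access."""

    def __init__(self, path, processor, tokenizer, args):
        self.max_seq_length = args.max_source_length + args.max_target_length
        with open(path, 'r', encoding='utf-8') as f:
            self.data = json.load(f)  # list of dicts holding image paths, not pixels
        self.processor = processor
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        item = self.data[idx]
        # The image is opened here, per sample, instead of in __init__,
        # so peak memory stays proportional to the batch size.
        image = Image.open(item['img']).convert('RGB')  # 'img' key is an assumption
        image = self.processor(image)
        return {
            'image': image,
            'prompt': item['prompt'],  # field names are assumptions
            'label': item['label'],
        }
```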

![image](https://github.com/THUDM/VisualGLM-6B/assets/27617518/e20445bf-5604-4c1f-aacb-70b11cde3dee) What does the data format for multi-turn dialogue look like?

> Change the prompt to "这张图片里有苹果吗?\n答:有。\n问:有几个苹果?" and the label to "有2个。", and that is equivalent to training the second turn of a multi-turn dialogue.

Got it. So multi-turn dialogue means turning the dialogue history into the prompt and the next-turn reply into the label for training. Thanks a lot!
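A minimal sketch of that recipe, turning a turn-by-turn conversation into (prompt, label) training pairs. The "问:"/"答:" separators follow the example in the quote; the `turns` structure and function name are assumptions for illustration:

```python
def make_training_pairs(turns):
    """turns: list of (question, answer) tuples in dialogue order.

    Returns one (prompt, label) example per turn. The prompt for turn k
    concatenates the full history, so its token count grows with every
    additional turn.
    """
    pairs = []
    history = ""
    for i, (question, answer) in enumerate(turns):
        # First turn: the question alone; later turns: history plus "问:".
        prompt = question if i == 0 else history + "问:" + question
        pairs.append((prompt, answer))
        history = prompt + "\n答:" + answer + "\n"
    return pairs

# Reproduces the example from the quote:
# turn 1 -> ("这张图片里有苹果吗?", "有。")
# turn 2 -> ("这张图片里有苹果吗?\n答:有。\n问:有几个苹果?", "有2个。")
pairs = make_training_pairs([("这张图片里有苹果吗?", "有。"),
                             ("有几个苹果?", "有2个。")])
```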

> Change the prompt to "这张图片里有苹果吗?\n答:有。\n问:有几个苹果?" and the label to "有2个。", and that is equivalent to training the second turn of a multi-turn dialogue.

But I still have a question: with this format, the token count keeps accumulating as the dialogue turns pile up, which seems quite inefficient for training. The multi-turn dialogue datasets I've seen, such as LLaVA's, actually use the format below. Isn't that more reasonable?

![image](https://github.com/THUDM/VisualGLM-6B/assets/27617518/4615cabf-126d-4444-8d63-0de75fb92934)

Running the demo failed with the config below. What went wrong?

```
python demo_audiovideo.py --cfg-path eval_configs/video_llama_eval_withaudio.yaml --model_type llama_v2 --gpu-id 0
```

```yaml
llama_model: "/group/30155/jasoncjxcai/Video-LLaMA/Video-LLaMA-2-7B-Finetuned/llama-2-7b-chat-hf"
imagebind_ckpt_path: "/group/30155/jasoncjxcai/Video-LLaMA/Video-LLaMA-2-7B-Finetuned/imagebind_huge.pth"
ckpt: '/group/30155/jasoncjxcai/Video-LLaMA/Video-LLaMA-2-7B-Finetuned/VL_LLaMA_2_7B_Finetuned.pth'  # you can use our pretrained ckpt from https://huggingface.co/DAMO-NLP-SG/Video-LLaMA-2-13B-Pretrained/
ckpt_2: '/group/30155/jasoncjxcai/Video-LLaMA/Video-LLaMA-2-7B-Finetuned/AL_LLaMA_2_7B_Finetuned.pth'
```

![image](https://github.com/DAMO-NLP-SG/Video-LLaMA/assets/27617518/77ddbd44-fd62-41d0-852a-7b5785e7eb2f)

> Hi, how can I make the inference code evaluate videos in batch? I naively concatenated the tensors along dimension 0 and got this error.
>
> ![image](https://private-user-images.githubusercontent.com/66267981/289263385-3820e0c1-0490-458f-bec5-3ab6b3982087.png)
> ...