Brench
> > > I'm getting a similar error — did you end up solving it? @Rocky77JHxu
> >
> > I eventually switched to the ms-swift framework so I could at least get the task done first.
>
> Oh, thanks! But I hit a similar error with ms-swift too. The command I used was `swift eval --model_type llava1_5-7b-instruct --eval_dataset POPE`, and the connection to OpenAI timed out. I have already downloaded the model and dataset locally — did you set anything else so that it uses the local model directly instead of connecting to OpenAI?

Set the `base_url` and `key` of your locally deployed model service in the `.env` file.
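For reference, a minimal sketch of what that `.env` could look like, assuming an OpenAI-compatible endpoint served locally. The exact variable names depend on the framework — VLMEvalKit, for instance, reads `OPENAI_API_BASE` and `OPENAI_API_KEY` from `.env` — and the URL and key below are placeholder values:

```
# .env (hypothetical values for a locally served model)
OPENAI_API_BASE=http://127.0.0.1:8000/v1/chat/completions
OPENAI_API_KEY=EMPTY
```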
I have verified that Mantis-CLIP can be deployed with vLLM. But Mantis-SigLIP has a problem, as follows: in the config of Mantis-SigLIP, `image_size = 384 and patch_size = 14`, but...
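For context, presumably the mismatch is in the patch grid: a ViT tiles the image into `image_size / patch_size` patches per side, and 384 is not evenly divisible by 14. A quick standalone check (plain Python, not repo code):

```python
image_size, patch_size = 384, 14
print(image_size % patch_size)    # 6  -> not evenly divisible
print(image_size // patch_size)   # 27 -> a floored 27x27 patch grid
# A framework that asserts divisibility (rather than flooring) will
# reject this image_size/patch_size combination at load time.
```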
> > In `.env`, set `LMUData=` to the folder path.
>
> Where is `.env`? I can't find it.

Just create it yourself~
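If you're creating it from scratch, a minimal sketch (the path is a placeholder):

```
# .env at the repo root — create the file yourself if it doesn't exist
LMUData=/path/to/your/LMUData
```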
I think you can use vllm-ascend to serve the model, then run the test through the API service.
Yes, it's supported: you can deploy with vllm-ascend and then run the evaluation through the API.
> > Yes, it's supported: you can deploy with vllm-ascend and then run the evaluation through the API.
>
> Could you share a code example?

vllm-ascend: https://github.com/vllm-project/vllm-ascend.git (it works just like normal `vllm serve`). See https://github.com/open-compass/VLMEvalKit/blob/main/docs/zh-CN/Quickstart.md — configure your base URL and api_key, then run the test.
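For what it's worth, a rough end-to-end sketch following those docs — assuming vllm-ascend exposes the same `vllm serve` CLI and OpenAI-compatible endpoint as upstream vLLM; the model path, port, and the `--model` entry name below are placeholders, not verified values:

```bash
# 1. Serve the model on the Ascend machine (same CLI as upstream vLLM)
vllm serve /path/to/your/model --port 8000

# 2. In VLMEvalKit's .env, point the API client at the local endpoint:
#    OPENAI_API_BASE=http://127.0.0.1:8000/v1/chat/completions
#    OPENAI_API_KEY=EMPTY

# 3. Run the evaluation through the API (the model name is whatever
#    API model entry you configured in VLMEvalKit)
python run.py --data POPE --model <your-api-model-entry>
```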
```python
# MODEL_CLS and model_path come from the surrounding wrapper code
self.model = MODEL_CLS.from_pretrained(
    model_path,
    torch_dtype='auto',
    attn_implementation='eager',
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    # cap per-device memory so the weights are sharded across 4 GPUs + CPU
    max_memory={0: "15GiB", 1: "15GiB", 2: "15GiB", 3: "15GiB", "cpu": "40GiB"},
)
```

From the error message, maybe you should adjust...