唐国梁Tommy
@iwtfnsc Just change the paths in the code; it is a very simple change. If you have any questions, my QQ is 896165277.
@1263451385 In your dataset, some folders contain no images, which is why no shape is shown. I suggest copying the dataset into the project again. My QQ is 896165277 if you run into any other problems.
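A quick way to spot the problem folders is sketched below, assuming the dataset is laid out as one sub-folder per class under a root directory; the path and extension list are placeholders, adjust them to your project.

```python
import os

dataset_root = "path/to/dataset"  # placeholder path, adjust to your project
image_exts = (".jpg", ".jpeg", ".png", ".bmp")

# List every sub-folder that contains no image files; these are the folders
# that produce the missing-shape error described above.
for folder in sorted(os.listdir(dataset_root)):
    folder_path = os.path.join(dataset_root, folder)
    if not os.path.isdir(folder_path):
        continue
    images = [f for f in os.listdir(folder_path) if f.lower().endswith(image_exts)]
    if not images:
        print(f"No images found in: {folder_path}")
```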
I suggest pulling the image with Docker: `docker pull pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime`, then starting a container, installing transformers==3.4.0 inside it, and running the script there. Tested and working.
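A rough sketch of that workflow, assuming an NVIDIA GPU with the NVIDIA container toolkit installed; the mount path and script name are placeholders.

```bash
docker pull pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime

# Start an interactive container with the project directory mounted.
docker run --gpus all -it -v /path/to/project:/workspace \
    pytorch/pytorch:1.11.0-cuda11.3-cudnn8-runtime bash

# Inside the container:
pip install transformers==3.4.0
python your_script.py  # placeholder for the training/inference script
```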
> Are you referring to [this](https://github.com/InternLM/xtuner/blob/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/README.md#convert-pth-file-to-llava-model-in-xtuner-format-xtunerllava-llama-3-8b-v1_1)? I have tried all of the fine-tuned methods. When you convert .pth models to a HuggingFace or LLaVA model, there is an error `AssertionError: This...`
> File "/home/ubuntu/program/xtuner_llava/xtuner-main/xtuner-main/xtuner/model/llava.py", line 420, in to_huggingface_llava
> assert getattr(self.llm, 'hf_quantizer', None) is None,
> AssertionError: This conversion format does not support quantized LLM.

Have you solved this problem?
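For reference, a minimal sketch of the check that fires (not the project's API, it only mirrors the assertion quoted above): `model` is assumed to be the loaded xtuner LLaVA model with the inner transformers model on `model.llm`, and transformers sets `hf_quantizer` on models loaded with a quantization config (e.g. 4-bit QLoRA), which is exactly what `to_huggingface_llava` rejects.

```python
def llm_is_quantized(model) -> bool:
    # Mirrors the assertion in xtuner/model/llava.py: the conversion refuses to
    # run when the inner LLM was loaded with quantization enabled.
    return getattr(model.llm, "hf_quantizer", None) is not None
```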