chalesguo
I encountered the same problem. Any solutions?
File "/home/ubuntu/program/xtuner_llava/xtuner-main/xtuner-main/xtuner/model/llava.py", line 420, in to_huggingface_llava
    assert getattr(self.llm, 'hf_quantizer', None) is None, \
AssertionError: This conversion format does not support quantized LLM.
How should the projector_weight file be chosen?
Found the cause: the conversion needs a lot of memory. Increasing the virtual memory (swap) capacity solves it.
Increase virtual memory.
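For anyone hitting the same out-of-memory behavior, a minimal sketch of adding a swap file on Ubuntu is below. The swap-file path and size are assumptions, not values from this thread; pick a size that covers the peak memory of your conversion. The privileged commands are printed rather than executed so you can review them before running with sudo.

```shell
set -eu

# Hypothetical path and size -- adjust to your disk and memory gap.
SWAPFILE=/swapfile-xtuner
SIZE=8G

# Show current memory/swap so you can judge how much swap to add.
free -h || true

# These need root; printed here for review, run them manually with sudo.
cat <<EOF
sudo fallocate -l ${SIZE} ${SWAPFILE}
sudo chmod 600 ${SWAPFILE}
sudo mkswap ${SWAPFILE}
sudo swapon ${SWAPFILE}
EOF
```

To make the swap persist across reboots you would also add a line for the file to /etc/fstab; omitting that keeps the change temporary, which is enough for a one-off conversion.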
After the automatic upgrade to v2.2.22, translating selected text no longer works; rolling back to v2.2.13 restores normal behavior.