WangQi


> Could you make sure that you ran the `accelerate config` command and set it up properly before starting to train? The command I ran is as follows: ` accelerate launch --mixed_precision="fp16"...

> @Hellcat1005 It is difficult to debug this without a reproducible example. What dataset are you trying to use here? Is it a custom one? If you try running with...

I've encountered the same issue, but I don't know what the cause is.

I finally figured out what the full pipeline should look like. As far as I can tell, you cannot convert directly to the HuggingFace format, and you cannot convert directly to the official format either; both raise the error above. Here is what I did. After LoRA fine-tuning, run:

` xtuner convert pth_to_hf path/to/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/finetune/llava_llama3_8b_qlora.py path/to/iter_12000.pth path/to/xtuner_model --safe-serialization `

This creates the `xtuner_model` folder, which contains an `llm_adapter` folder and a `projector` folder. Then run:

` xtuner convert merge path/to/models--meta-llama--Meta-Llama-3-8B-Instruct path/to/xtuner_model/llm_adapter path/to/llm_merge --safe-serialization `

After that, run configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/convert_xtuner_weights_to_hf.py:

` python convert_xtuner_weights_to_hf.py --text_model_id path/to/xtuner_model/llm_merge --vision_model_id...
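The steps above can be collected into a small dry-run script. This is only a sketch: the `run` helper echoes each command instead of executing it, the `path/to/...` locations are the same placeholders used in the comment, and the extra flags of the final `convert_xtuner_weights_to_hf.py` call are not reproduced because they are truncated in the original.

```shell
#!/bin/sh
set -e

# Dry-run helper: prints each command instead of executing it.
# Delete the "run" prefix below to actually execute the pipeline.
run() { echo "+ $*"; }

# Placeholder paths copied from the comment above (adjust to your setup).
CONFIG=path/to/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/finetune/llava_llama3_8b_qlora.py
CKPT=path/to/iter_12000.pth
XTUNER_OUT=path/to/xtuner_model
BASE_LLM=path/to/models--meta-llama--Meta-Llama-3-8B-Instruct
MERGED=path/to/llm_merge

# Step 1: convert the LoRA .pth checkpoint to XTuner's HF-style layout;
# this produces $XTUNER_OUT with llm_adapter/ and projector/ inside.
run xtuner convert pth_to_hf "$CONFIG" "$CKPT" "$XTUNER_OUT" --safe-serialization

# Step 2: merge the LoRA adapter into the base Llama-3 weights.
run xtuner convert merge "$BASE_LLM" "$XTUNER_OUT/llm_adapter" "$MERGED" --safe-serialization

# Step 3: convert the merged model to HuggingFace LLaVA format. The
# remaining flags are truncated in the original comment, so they are
# omitted here; check the script's --help for the full argument list.
run python convert_xtuner_weights_to_hf.py --text_model_id "$MERGED"
```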

Same question here.