
Unable to load the model when resuming from a checkpoint with LoRA

Open MindLostGuy opened this issue 1 year ago • 4 comments

Describe the bug
deepspeed zero3, lora_target_modules ALL, model_type phi3-vision-128k-instruct, multi-node multi-GPU. When resuming from a checkpoint, the model apparently fails to load. Note that the checkpoint folder only contains the LoRA-related parameters, but the error shows the model trying to load many more parameters.

File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 1708, in _inner_training_loop deepspeed_load_checkpoint(self.model_wrapped, resume_from_checkpoint) File "/opt/conda/lib/python3.10/site-packages/transformers/integrations/deepspeed.py", line 402, in deepspeed_load_checkpoint load_path, _ = deepspeed_engine.load_checkpoint( File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2724, in load_checkpoint load_path, client_states = self._load_checkpoint(load_dir, File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2794, in _load_checkpoint self.load_module_state_dict(checkpoint=checkpoint, File "/opt/conda/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2587, in load_module_state_dict self.module.load_state_dict( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM: Missing key(s) in state_dict: "base_model.model.model.embed_tokens.weight", "base_model.model.model.vision_embed_tokens.glb_GN", "base_model.model.model.vision_embed_tokens.sub_GN", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.embeddings.class_embedding", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.embeddings.patch_embedding.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.embeddings.position_embedding.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.pre_layrnorm.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.pre_layrnorm.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.k_proj.base_layer.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.k_proj.base_layer.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.v_proj.base_layer.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.v_proj.base_layer.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.q_proj.base_layer.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.q_proj.base_layer.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.out_proj.base_layer.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.self_attn.out_proj.base_layer.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.layer_norm1.weight", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.layer_norm1.bias", "base_model.model.model.vision_embed_tokens.img_processor.vision_model.encoder.layers.0.mlp.fc1.base_layer.weight", 省略

MindLostGuy commented on Jun 22 '24

The resume_from_checkpoint path on each machine needs to point to the checkpoint stored on that machine.

Jintao-Huang commented on Jun 25 '24

The resume_from_checkpoint path on each machine needs to point to the checkpoint stored on that machine.

I ran into the same problem with single-node multi-GPU training:
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
    Missing key(s) in state_dict: "base_model.model.vision_model.embeddings.class_embedding", "base_model.model.vision_model.embeddings.position_embedding", "base_model.model.vision_model.embeddings.patch_embedding.weight", "base_model.model.vision_model.embeddings.patch_embedding.bias", "base_model.model.vision_model.encoder.layers.0.ls1", "base_model.model.vision_model.encoder.layers.0.ls2", "base_model.model.vision_model.encoder.layers.0.attn.qkv.weight", "base_model.model.vision_model.encoder.l... (truncated)
I want to resume LoRA training; is there any way to do this?
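
One possible workaround, offered strictly as a hedged sketch (the checkpoint directory below is a placeholder, and this restores only the adapter weights, not the optimizer/LR-scheduler state that --resume_from_checkpoint would also recover): start a fresh run without --resume_from_checkpoint, but first copy the previously trained LoRA tensors into the PeftModel using PEFT's set_peft_model_state_dict, assuming the checkpoint folder contains the usual PEFT adapter file (adapter_model.safetensors or adapter_model.bin). No strict full-model load is involved.

# Hedged workaround sketch: apply only the saved adapter tensors to an
# already-built PeftModel. CKPT_DIR is a placeholder; optimizer and
# LR-scheduler state are NOT restored by this approach.
import os
import torch
from safetensors.torch import load_file
from peft import set_peft_model_state_dict

CKPT_DIR = "/path/to/previous/checkpoint-XXXX"  # placeholder path

def load_adapter_weights(peft_model, ckpt_dir=CKPT_DIR):
    """Push the saved LoRA adapter tensors into the PeftModel."""
    st_path = os.path.join(ckpt_dir, "adapter_model.safetensors")
    if os.path.exists(st_path):
        adapter_state = load_file(st_path)
    else:
        adapter_state = torch.load(os.path.join(ckpt_dir, "adapter_model.bin"),
                                   map_location="cpu")
    # set_peft_model_state_dict only touches the adapter parameters, so the
    # frozen base/vision weights are never expected in the checkpoint.
    set_peft_model_state_dict(peft_model, adapter_state)
    return peft_model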

yepzhang commented on Jul 21 '24

First, please confirm that you are on the latest version, 2.2.3, or the main branch.

Then, could you share a way to reproduce this?

Jintao-Huang commented on Jul 21 '24

First, please confirm that you are on the latest version, 2.2.3, or the main branch.

Then, could you share a way to reproduce this?

Here is my bash script. I first trained on another dataset and then resumed on a new dataset; the two datasets have identical keys.

NPROC_PER_NODE=4 \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
swift sft \
    --model_type internvl2-8b \
    --model_id_or_path /InternVL-8B \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --output_dir VTT/trec16_22 \
    --dataset dataset/anno/internvl_train_aug_previous.json \
    --max_length 4096 \
    --deepspeed default-zero2 \
    --save_steps 500 \
    --evaluation_strategy 'no' \
    --save_strategy 'steps' \
    --save_total_limit -1 \
    --val_dataset dataset/anno/internvl_test_trec.json \
    --resume_from_checkpoint nternvl2-8b/v29-20240722-220921/checkpoint-2328 \
    --num_train_epochs 2

yepzhang commented on Jul 26 '24