[BUG] baichuan-7b is not supported
Pretraining fails with a trust_remote_code error
Hi,
Please modify the file https://github.com/OptimalScale/LMFlow/blob/main/src/lmflow/models/hf_decoder_model.py
by adding a new argument, trust_remote_code=True.
For example,
Original:
config = AutoConfig.from_pretrained(pretrained_model_dir, torch_dtype=torch.float16)
self.backend_model = AutoModelForCausalLM.from_pretrained(
    model_args.model_name_or_path,
    config=config,
    torch_dtype=torch_dtype,
)
Modified:
config = AutoConfig.from_pretrained(pretrained_model_dir, trust_remote_code=True, torch_dtype=torch.float16)
self.backend_model = AutoModelForCausalLM.from_pretrained(
    model_args.model_name_or_path,
    config=config,
    torch_dtype=torch_dtype,
    trust_remote_code=True,
)
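As a side note, hard-coding trust_remote_code=True opts in to executing model-repo code for every model. A minimal sketch of making it conditional instead (the model_args attribute name here is an illustration, not LMFlow's actual field):

```python
def build_load_kwargs(model_args, torch_dtype=None):
    """Build shared kwargs for AutoConfig/AutoModelForCausalLM.from_pretrained.

    Hypothetical helper: only opts in to running model-repo code
    (required for models like baichuan-7b, whose modeling code lives
    in the model repository rather than in transformers itself) when
    the caller explicitly requests it.
    """
    kwargs = {"torch_dtype": torch_dtype}
    if getattr(model_args, "trust_remote_code", False):
        kwargs["trust_remote_code"] = True
    return kwargs
```

Both from_pretrained calls could then unpack the same dict, e.g. AutoConfig.from_pretrained(path, **build_load_kwargs(model_args)), keeping the two call sites consistent.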
thanks
What if my baichuan model is on my local machine? I got the same error. If I add trust_remote_code=True, it looks under my /root/.cache/huggingface/modules/transformers_modules/ to find baichuan.
In that case, you may replace the model name with your local model path, for example:
./scripts/run_finetune.sh \
--model_name_or_path output_models/your-baichuan-model \
--dataset_path data/alpaca/train \
--output_model_path output_models/finetuned-baichuan-7b
By the way, the latest version of LMFlow on main now supports a --trust_remote_code argument on the command line. Hope that solves the issue!
After updating to the latest version of LMFlow on main (commit c530a6f28de94f3b83a2a4b4ff4dbc96529c0503) and reinstalling my environment with pip install -r requirements.txt, I am now able to fine-tune baichuan7b-2, although fine-tuning baichuan7b-2 with LoRA is not supported yet.
Anyway, thanks a lot!