
KeyError: 'adapter_query.weight' on fine-tuned adapter


I fine-tuned the 7B model with the following command:

torchrun --nproc_per_node 4 finetuning.py --model Llama7B_adapter --llama_model_path /root/llma-64/ --data_path /root/alpaca-lora/alpaca_data.json --adapter_layer 30 --adapter_len 10 --max_seq_len 512 --batch_size 4 --epochs 5 --output_dir ./output/

The adapter checkpoint was written to the output directory. But when I try to load the adapter with this command:

torchrun --nproc_per_node 1 example.py --ckpt_dir /root/llma-64/7B/ --tokenizer_path /root/llma-64/tokenizer.model --adapter_path alpaca_finetuning_v1/output/checkpoint-4.pth

I'm getting this error below.

[screenshot: traceback ending in KeyError: 'adapter_query.weight']
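In case it helps narrow this down, here is a minimal sketch (assuming a standard PyTorch checkpoint file) for checking which keys the saved file actually contains:

```python
import torch

# Inspect what the fine-tuning run actually saved
# (path taken from the load command above).
ckpt = torch.load("alpaca_finetuning_v1/output/checkpoint-4.pth",
                  map_location="cpu")

# The KeyError suggests the loading script looks up 'adapter_query.weight'
# at the top level of the loaded dict; training checkpoints are often saved
# nested instead, e.g. {'model': state_dict, 'optimizer': ..., 'epoch': ...}.
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
    state_dict = ckpt.get("model", ckpt)
    print([k for k in state_dict if "adapter" in k])
```

If the adapter weights turn out to be nested under a 'model' key, that would explain why a top-level lookup of 'adapter_query.weight' fails.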

Any thoughts on what might have gone wrong with the fine-tuning?

dittops · May 08 '23 08:05