[BUG/Help] `No huggingface_hub attribute hf_api` error during P-Tuning v2
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
Trying to fine-tune the model with P-Tuning v2, using the provided clothing (AdvertiseGen) dataset and the locally downloaded chatglm-6b-int8 pre-trained weights.
Expected Behavior
No response
Steps To Reproduce
I used the locally downloaded 6b-int8 weights to overwrite a clone of https://huggingface.co/THUDM/chatglm-6b-int8, and cli_demo.py runs normally with them.
When attempting P-Tuning v2 fine-tuning, I extracted the provided dataset into this directory. The training parameters are as follows:
CUDA_VISIBLE_DEVICES=0 python3 main.py \
    --do_train \
    --train_file AdvertiseGen/train.json \
    --validation_file AdvertiseGen/dev.json \
    --prompt_column content \
    --response_column summary \
    --overwrite_cache \
    --model_name_or_path /home/Downloads/repo/ChatGLM/chatglm-6b-int8 \
    --output_dir output/adgen-chatglm-6b-int8-pt-$PRE_SEQ_LEN-$LR \
    --overwrite_output_dir \
    --max_source_length 64 \
    --max_target_length 64 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 16 \
    --predict_with_generate \
    --max_steps 3000 \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate $LR \
    --pre_seq_len $PRE_SEQ_LEN \
    --quantization_bit 4
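Note that the command above references `$PRE_SEQ_LEN` and `$LR` without defining them, so they must be set in the shell first. A minimal sketch, assuming values like those in the repository's example `train.sh` (adjust to your setup):

```shell
# These variables are referenced by the training command but not defined there.
# The values below mirror the repository's example train.sh (assumption);
# an unset variable would silently expand to an empty string otherwise.
PRE_SEQ_LEN=128
LR=2e-2
export PRE_SEQ_LEN LR
echo "pre_seq_len=$PRE_SEQ_LEN lr=$LR"
```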
The error output is:
Traceback (most recent call last):
File "//Downloads/repo/ChatGLM/ChatGLM-6B/ptuning/main.py", line 27, in
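The truncated traceback points at the imports at the top of `ptuning/main.py`. This kind of `AttributeError` on `huggingface_hub.hf_api` is commonly reported as a mismatch or broken install of `huggingface_hub` relative to `transformers` (an assumption based on similar reports, not confirmed from this log). A minimal stdlib-only diagnostic sketch that checks whether both packages are resolvable at all, without importing them:

```python
import importlib.util

def can_resolve(name: str) -> bool:
    """Return True if Python can locate the module without importing it."""
    return importlib.util.find_spec(name) is not None

# Diagnostic sketch: before running ptuning/main.py, confirm the packages the
# traceback implicates can be found at all. A missing or shadowed install of
# huggingface_hub is one plausible cause of the missing hf_api attribute.
for mod in ("transformers", "huggingface_hub"):
    print(f"{mod}: {'found' if can_resolve(mod) else 'MISSING'}")
```

If both resolve but the error persists, upgrading `huggingface_hub` to a version compatible with Transformers 4.27.1 (e.g. `pip install -U huggingface_hub`) is worth trying.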
Environment
- OS: Ubuntu 18
- Python: 3.9
- Transformers: 4.27.1
- PyTorch: 1.12
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True
Anything else?
No response