
.\scripts\run_chatbot.sh .\output_models\llama_7b_lora\ run error

NingBoHao opened this issue 2 years ago · 2 comments

(lmflow) PS E:\LMFlow-main\LMFlow-main> bash .\scripts\run_chatbot.sh .\output_models\llama_7b_lora
[2023-04-24 22:29:37,085] [WARNING] [runner.py:190:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
Detected CUDA_VISIBLE_DEVICES=0: setting --include=localhost:0
[2023-04-24 22:29:37,119] [INFO] [runner.py:540:main] cmd = D:\UserSoftware\Anaconda3\envs\lmflow\python.exe -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None examples/chatbot.py --deepspeed configs/ds_config_chatbot.json --model_name_or_path .\output_models\llama_7b_lora
[2023-04-24 22:29:38,823] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0]}
[2023-04-24 22:29:38,823] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-04-24 22:29:38,823] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-04-24 22:29:38,823] [INFO] [launch.py:247:main] dist_world_size=1
[2023-04-24 22:29:38,823] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0
configs/ds_config_chatbot.json
Traceback (most recent call last):
  File "E:\LMFlow-main\LMFlow-main\examples\chatbot.py", line 155, in <module>
    main()
  File "E:\LMFlow-main\LMFlow-main\examples\chatbot.py", line 69, in main
    model = AutoModel.get_model(
  File "e:\lmflow-main\lmflow-main\src\lmflow\models\auto_model.py", line 16, in get_model
    return HFDecoderModel(model_args, *args, **kwargs)
  File "e:\lmflow-main\lmflow-main\src\lmflow\models\hf_decoder_model.py", line 229, in __init__
    self.tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)
  File "D:\UserSoftware\Anaconda3\envs\lmflow\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 720, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "D:\UserSoftware\Anaconda3\envs\lmflow\lib\site-packages\transformers\tokenization_utils_base.py", line 1795, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for '.\output_models\llama_7b_lora'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '.\output_models\llama_7b_lora' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.
[2023-04-24 22:31:36,904] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 5084
[2023-04-24 22:31:36,926] [ERROR] [launch.py:434:sigkill_handler] ['D:\UserSoftware\Anaconda3\envs\lmflow\python.exe', '-u', 'examples/chatbot.py', '--local_rank=0', '--deepspeed', 'configs/ds_config_chatbot.json', '--model_name_or_path', '.\output_models\llama_7b_lora\'] exits with return code = 1

NingBoHao · Apr 24 '23 14:04

When running bash .\scripts\run_chatbot.sh, two parameters should be provided. The first is the path to the llama-7b base model, and the second is the path to the robin-7b LoRA weights. For example, you can use the following command as a reference: bash .\scripts\run_chatbot.sh pinkmanlove/llama-7b-hf .\output_models\llama_7b_lora

shizhediao · Apr 24 '23 14:04
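For readers hitting the same OSError: the traceback above comes from AutoTokenizer.from_pretrained being pointed at the LoRA output directory, which holds only adapter weights and no tokenizer files. A minimal sketch of what combining the two paths looks like (this is not LMFlow's actual loading code; it assumes the peft package and uses the example paths from this thread):

```python
# Sketch only: shows why the tokenizer must load from the base model path,
# not the LoRA directory. LMFlow's HFDecoderModel may wire this differently.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_path = "pinkmanlove/llama-7b-hf"        # base llama-7b in HF format
lora_path = "./output_models/llama_7b_lora"  # adapter weights only

# This is the call that raised OSError when given lora_path instead:
tokenizer = AutoTokenizer.from_pretrained(base_path)
model = AutoModelForCausalLM.from_pretrained(base_path)
model = PeftModel.from_pretrained(model, lora_path)  # overlay the adapter
```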

It turns out I had skipped this step: python ./scripts/convert_llama_weights_to_hf.py --input_dir ${llama-path} --model_size 7B --output_dir ${llama-hf-path}/llama-7b-hf

NingBoHao · Apr 24 '23 15:04
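If you are unsure whether the conversion step ran to completion, a quick sanity check (my own snippet, not part of LMFlow; the directory path is hypothetical and exact file names vary across transformers versions) is to look for the tokenizer files the error message asks for:

```python
# Hypothetical check: a converted llama-7b-hf directory should contain
# tokenizer files alongside the weights. Newer transformers versions may
# ship tokenizer.json instead of (or in addition to) tokenizer.model.
import os

hf_dir = "./llama-7b-hf"  # hypothetical --output_dir from the script above
expected = ["config.json", "tokenizer_config.json", "tokenizer.model"]
missing = [f for f in expected if not os.path.isfile(os.path.join(hf_dir, f))]
print("missing:", missing if missing else "none, directory looks converted")
```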

This issue has been marked as stale because it has not had recent activity. If you think this still needs to be addressed, please feel free to reopen this issue. Thanks!

shizhediao · Jun 19 '23 10:06