VisualGLM-6B

Error when trying the API: why does it still need to connect to Hugging Face even though the SAT model was downloaded?

Open lilinrestart opened this issue 2 years ago • 2 comments

```
[2023-09-22 20:20:22,151] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-09-22 20:20:22,698] [INFO] webdataset not install, use pip to install, or you cannot use SimpleDistributedWebDataset and MetaDistributedWebDataset.
[2023-09-22 20:20:23,097] [INFO] building VisualGLMModel model ...
[2023-09-22 20:20:23,097] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-09-22 20:20:23,098] [INFO] [RANK 0] You are using model-only mode. For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
/home/ubuntu/.local/lib/python3.8/site-packages/torch/nn/init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
[2023-09-22 20:20:29,704] [INFO] [RANK 0] > number of parameters on model parallel rank 0: 7802193408
[2023-09-22 20:20:52,361] [INFO] [RANK 0] global rank 0 is loading checkpoint /home/ubuntu/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt
[2023-09-22 20:21:18,441] [INFO] [RANK 0] Will continue but found unexpected_keys! Check whether you are loading correct checkpoints: ['transformer.position_embeddings.weight'].
[2023-09-22 20:21:18,443] [INFO] [RANK 0] > successfully loaded /home/ubuntu/.sat_models/visualglm-6b/1/mp_rank_00_model_states.pt
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/utils/hub.py", line 429, in cached_file
    resolved_file = hf_hub_download(
  File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/huggingface_hub/file_download.py", line 1291, in hf_hub_download
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: Connection error, and we cannot find the requested files in the disk cache.
Please try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "api.py", line 10, in <module>
    model, tokenizer = get_infer_setting(gpu_device=gpu_number)
  File "/data/lilin/visualglm6b/VisualGLM-6B/model/infer_util.py", line 28, in get_infer_setting
    tokenizer = AutoTokenizer.from_pretrained("VisualGLM-6B/glm-model", trust_remote_code=True)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 701, in from_pretrained
    config = AutoConfig.from_pretrained(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 620, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/ubuntu/.local/lib/python3.8/site-packages/transformers/utils/hub.py", line 469, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like VisualGLM-6B/glm-model is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```
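For context: the traceback shows the SAT checkpoint only covers the model weights, while `get_infer_setting` (infer_util.py, line 28) still fetches the ChatGLM tokenizer via `AutoTokenizer.from_pretrained`, which hits the hub when the files are not cached. A minimal workaround sketch, assuming the tokenizer files are already cached or copied locally; the directory path below is hypothetical:

```python
import os

# Option 1: force transformers into offline mode so it only reads the local
# cache and never opens a connection to huggingface.co. These variables must
# be set before `import transformers` (assumption: the tokenizer files were
# previously downloaded into the local Hugging Face cache).
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

# Option 2 (sketch): download the tokenizer files once on a machine with
# network access, copy them over, and point infer_util.py at that local
# directory instead of a hub id. The path here is hypothetical.
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained(
#     "/path/to/local/chatglm-tokenizer",
#     trust_remote_code=True,
#     local_files_only=True,
# )
```

With `local_files_only=True`, transformers raises a clear error if the directory is missing `config.json` or the tokenizer files, rather than attempting a download.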

lilinrestart · Sep 22 '23

@Sleepychord @lykeven @cenyk1230 @1049451037

lilinrestart · Sep 22 '23

I hit the same problem running `python web_demo.py` in VisualGLM-6B: OSError: We couldn't connect to 'https://huggingface.co' to load this file

elesun2018 · Oct 12 '23