
[bug]: Text-inversion training gives OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'.

Open ebziw opened this issue 3 years ago • 2 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues

OS

Linux

GPU

cuda

VRAM

10GB

What happened?

CentOS Stream 9, freshly installed; the web UI works.

I tried text-inversion training on a simple case. It fails with OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. Following the code into transformers/tokenization_utils_base.py, I forced local_files_only = False and removed invokeAI/models/openai; it then started downloading files from Hugging Face, but the error changed to "config.json" not found. I can't think of anything else to try.

Here's the command and result:

python3 ./main.py -t --base ./configs/stable-diffusion/v1-finetune.yaml --actual_resume /mnt/model-disk/liyong3/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt -n wangzuxian --gpus 0 --data_root /home/liyong3/photos/ -init_word 'wangzuxian'

Global seed set to 23
Running on GPUs 0
Loading model from /mnt/model-disk/liyong3/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/liyong3/invokeAI/./main.py:726 in
│
│   723 │   │   │   config.model.params.personalization_config.params.initializer_words = [opt.i
│   724 │   │   │
│   725 │   │   if opt.actual_resume:
│ ❱ 726 │   │   │   model = load_model_from_config(config, opt.actual_resume)
│   727 │   │   else:
│   728 │   │   │   model = instantiate_from_config(config.model)
│   729 │
│ ...skip a few blocks...
│ /home/liyong3/invokeAI/installer_files/env/envs/invokeai/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1791 in from_pretrained
│
│   1788 │   │   │   )
│   1789 │   │
│   1790 │   │   if all(full_file_name is None for full_file_name in resolved_vocab_files.values(
│ ❱ 1791 │   │   │   raise EnvironmentError(
│   1792 │   │   │   │   f"Can't load tokenizer for '{pretrained_model_name_or_path}'. If you wer
│   1793 │   │   │   │   "'https://huggingface.co/models', make sure you don't have a local direc
│   1794 │   │   │   │   f"Otherwise, make sure '{pretrained_model_name_or_path}' is the correct
╰───────────────────────────────────────────────────────────────────────────────────────────────────╯
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
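For reference (not part of the original report): a minimal sketch of pre-fetching the tokenizer outside InvokeAI, using only the stock transformers API, assuming the machine has network access. The save path below is illustrative, not InvokeAI's actual layout.

# Hedged sketch: pre-fetch the CLIP tokenizer so a later offline load can succeed.
# Assumes network access; the save directory is hypothetical.
from transformers import CLIPTokenizer

# Downloads vocab.json, merges.txt, tokenizer_config.json and special_tokens_map.json
# into the Hugging Face cache (~/.cache/huggingface by default).
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

# Optionally keep a local copy that can be pointed to explicitly.
tokenizer.save_pretrained("/path/to/local/clip-vit-large-patch14")

# A subsequent load with local_files_only=True should now work without network access.
tokenizer = CLIPTokenizer.from_pretrained(
    "openai/clip-vit-large-patch14", local_files_only=True
)
print(tokenizer("a test prompt")["input_ids"])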

Screenshots

No response

Additional context

No response

Contact Details

No response

ebziw avatar Dec 08 '22 15:12 ebziw

I'm running into the same problem. I guess it has something to do with the new location. Hoping somebody takes a look at this; I'll investigate further tomorrow.

YannickAaron avatar Dec 09 '22 17:12 YannickAaron

Okay, easy fix: check your globals.py (ldm > invoke > globals.py). In my case the Globals.root variable was still set to '.'. I just replaced it with the new InvokeAI path.

YannickAaron avatar Dec 09 '22 17:12 YannickAaron
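For reference, a minimal sketch of the kind of check described in the comment above, assuming ldm/invoke/globals.py exposes a Globals namespace with a root attribute as that comment suggests. The expected directory layout is inferred from the report ("invokeAI/models/openai") and the example path is hypothetical.

# Hedged sketch of the diagnostic described above; run from the InvokeAI checkout.
# Assumes ldm/invoke/globals.py provides Globals.root, per the comment in this thread.
import os
from ldm.invoke.globals import Globals

print("Globals.root =", Globals.root)

# Layout inferred from the report ("invokeAI/models/openai"); the exact subfolders may differ.
expected = os.path.join(Globals.root, "models", "openai")
print("tokenizer files expected under:", expected, "exists:", os.path.isdir(expected))

# If Globals.root is still '.', edit globals.py (or whatever mechanism your version
# provides, e.g. an environment variable) so it points at the actual InvokeAI root:
# Globals.root = "/home/liyong3/invokeAI"   # example path, adjust to your install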

There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.

github-actions[bot] avatar Mar 16 '23 06:03 github-actions[bot]