[bug]: Textual-inversion training gives OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'.
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Linux
GPU
cuda
VRAM
10GB
What happened?
CentOS Stream 9, fresh install; the web UI works.
Tried textual-inversion training on a simple case. It gives OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. I tried to follow the code in transformers/tokenization_utils_base.py, forced local_files_only = False, and removed invokeAI/models/openai; it then started downloading files from Hugging Face, but the error changed to "config.json" not found. I can't think of anything else to try.
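For reference, this is roughly the call that fails, as a minimal sketch (the exact call site inside InvokeAI may differ; 'openai/clip-vit-large-patch14' is the tokenizer named in the traceback):

```python
# Minimal sketch of the failing load, assuming transformers is installed.
from transformers import CLIPTokenizer

# InvokeAI resolves this tokenizer from its local model cache; when the
# cached files are missing, or are looked up under the wrong root directory,
# from_pretrained raises:
#   OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'
tokenizer = CLIPTokenizer.from_pretrained(
    "openai/clip-vit-large-patch14",
    local_files_only=True,  # setting this to False falls back to downloading
)
```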
Here's the command and result:
python3 ./main.py -t --base ./configs/stable-diffusion/v1-finetune.yaml --actual_resume /mnt/model-disk/liyong3/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt -n wangzuxian --gpus 0 --data_root /home/liyong3/photos/ -init_word 'wangzuxian'
Global seed set to 23
Running on GPUs 0
Loading model from /mnt/model-disk/liyong3/models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/liyong3/invokeAI/./main.py:726 in
Screenshots
No response
Additional context
No response
Contact Details
No response
Running into the same problem. I guess it has something to do with the new location. Hope somebody takes a look at this; I'll investigate further tomorrow.
Okay, easy fix:
Check your globals.py (ldm/invoke/globals.py). In my case the Globals.root variable was still set to '.'.
I just replaced it with the actual path of my InvokeAI install.
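Roughly what that check looks like, as a sketch (assuming the Globals namespace that ldm/invoke/globals.py exposes; the path below is hypothetical, substitute your own install location):

```python
# Sketch of the check, assuming InvokeAI's Globals namespace.
from ldm.invoke.globals import Globals

# If this prints '.', the tokenizer cache under <root>/models/openai is
# resolved relative to the current working directory and won't be found.
print(Globals.root)

# Point the root at the actual install location (hypothetical path):
Globals.root = "/home/you/invokeAI"
```

With the root pointing at the real install, the cached tokenizer files under models/openai are found again and no download fallback is needed.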