Can't load tokenizer for 'openai/clip-vit-large-patch14'
# streamlit run scripts/demo/sampling.py --server.port 8000
Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.
You can now view your Streamlit app in your browser.
Network URL: http://172.113.1.9:8000
External URL: http://116.52.2.62:8000
Global seed set to 42
Global seed set to 42
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 2. Setting context_dim to [2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 2. Setting context_dim to [2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 10. Setting context_dim to [2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 2. Setting context_dim to [2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 2. Setting context_dim to [2048, 2048] now.
SpatialTransformer: Found context dims [2048] of depth 1, which does not match the specified 'depth' of 2. Setting context_dim to [2048, 2048] now.
2023-11-29 09:38:52.288 Uncaught app exception
Traceback (most recent call last):
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 264, in _get_or_create_cached_value
    cached_result = cache.read_result(value_key)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_resource_api.py", line 500, in read_result
    raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 312, in _handle_cache_miss
    cached_result = cache.read_result(value_key)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_resource_api.py", line 500, in read_result
    raise CacheKeyNotFoundError()
streamlit.runtime.caching.cache_errors.CacheKeyNotFoundError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 534, in _run_script
    exec(code, module.__dict__)
  File "/cmdata/docker/yfq/generative-models/scripts/demo/sampling.py", line 278, in <module>
    state = init_st(version_dict, load_filter=True)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 212, in wrapper
    return cached_func(*args, **kwargs)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 241, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 267, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 321, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "/cmdata/docker/yfq/generative-models/scripts/demo/streamlit_helpers.py", line 46, in init_st
    model, msg = load_model_from_config(config, ckpt if load_ckpt else None)
  File "/cmdata/docker/yfq/generative-models/scripts/demo/streamlit_helpers.py", line 86, in load_model_from_config
    model = instantiate_from_config(config.model)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/models/diffusion.py", line 59, in __init__
    self.conditioner = instantiate_from_config(
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/modules/encoders/modules.py", line 79, in __init__
    embedder = instantiate_from_config(embconfig)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/util.py", line 175, in instantiate_from_config
    return get_obj_from_str(config["target"])(**config.get("params", dict()))
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/sgm/modules/encoders/modules.py", line 348, in __init__
    self.tokenizer = CLIPTokenizer.from_pretrained(version)
  File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1768, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
^C Stopping...
^C
# ls
CODEOWNERS LICENSE-CODE README.md assets checkpoints2 configs data dist docker-compose.yml main.py model_licenses pyproject.toml pytest.ini requirements run.txt scripts sgm start.sh tests
My local directory does not have the same name. Can you help me?
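One way to narrow this down is to try loading the tokenizer directly, outside Streamlit; a minimal sketch (nothing in it is specific to the demo script):

```python
# If this raises the same OSError, the problem is Hub/network/cache
# access inside transformers, not the generative-models demo itself.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
print(tokenizer("a test prompt")["input_ids"])
```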
Same problem when running turbo.py
I set up internet access and made sure the machine could reach external sites. The code then automatically downloaded the open_clip weights, which worked for me. You may also need to remove some same-named files under /root/.cache/huggingface/hub/******.
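If the target machine cannot reach huggingface.co, another workaround is to fetch the tokenizer once on a machine that can, save it to a plain directory, and copy that directory over; a sketch, where the save path is an arbitrary example:

```python
# Run on a machine with internet access: download the tokenizer and
# save it to a local directory, then copy that directory to the
# offline machine.
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
tok.save_pretrained("/path/to/clip-vit-large-patch14")  # example path

# On the offline machine, load from the local copy instead:
# CLIPTokenizer.from_pretrained("/path/to/clip-vit-large-patch14")
```

The `version` string that reaches `CLIPTokenizer.from_pretrained(version)` in sgm/modules/encoders/modules.py (line 348 in the traceback above) would then need to point at that local directory, e.g. via the model config.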
Same problem when running turbo.py. And even after I changed the weights to local ones, it still tries to download from Hugging Face and shows "huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on."
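If the files are already present in the local cache, forcing offline mode can stop transformers from contacting the Hub at all; a sketch, assuming the cache under ~/.cache/huggingface/hub is complete:

```python
import os

# Must be set before transformers / huggingface_hub are imported.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import CLIPTokenizer

# local_files_only=True fails fast with a clear error if the cached
# files are incomplete, instead of retrying the network.
tok = CLIPTokenizer.from_pretrained(
    "openai/clip-vit-large-patch14", local_files_only=True
)
```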