Unable to load a remote model after a successful login
Describe the bug
I am trying to load a diffusers model, either from a local directory or from the remote Hub. The remote Hugging Face diffusers repository is not accessible after a successful login.
Reproduction
```
(pytorch)$ huggingface-cli login

        [Hugging Face ASCII art banner]

To login, `huggingface_hub` now requires a token generated from https://huggingface.co/settings/tokens .
Token:
Login successful
Your token has been saved to /Users/dlituiev/.huggingface/token
```
```python
from diffusers import StableDiffusionPipeline

model_loc = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_loc)
```
Error:

````
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py in _raise_for_status(response)
    130         Example:
--> 131
    132     ```py

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/requests/models.py in raise_for_status(self)
   1020         if http_error_msg:
-> 1021             raise HTTPError(http_error_msg, response=self)
   1022

HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/f15bc7606314c6fa957b4267bee417ee866c0b84/.gitattributes

During handling of the above exception, another exception occurred:

RepositoryNotFoundError                   Traceback (most recent call last)
/var/folders/43/m_k444pn2z7c6ygxj0y3_c4r0000gn/T/ipykernel_58721/1431136035.py in <module>
      2 # model_loc = pathlib.Path("/Users/dlituiev/repos/stable-diffusion-v-1-4")
      3 # model_loc = ("/Users/dlituiev/repos/stable-diffusion-v-1-4")
----> 4 pipe = StableDiffusionPipeline.from_pretrained(model_loc,
      5 #     local_files_only=True
      6 )

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/diffusers/pipeline_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
    152         Arguments:
    153             save_directory (`str` or `os.PathLike`):
--> 154                 Directory to which to save. Will be created if it doesn't exist.
    155         """
    156         self.save_config(save_directory)

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py in inner_f(*args, **kwargs)
     91                 f"Deprecated argument(s) used in '{f.__name__}':"
     92                 f" {', '.join(used_deprecated_args)}. Will not be supported from"
---> 93                 f" version '{version}'."
     94             )
     95             if custom_message is not None:

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py in snapshot_download(repo_id, revision, repo_type, cache_dir, library_name, library_version, user_agent, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, allow_regex, ignore_regex, allow_patterns, ignore_patterns)
    190                 filename=repo_file,
    191                 repo_type=repo_type,
--> 192                 revision=commit_hash,
    193                 cache_dir=cache_dir,
    194                 library_name=library_name,

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/file_download.py in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout)
   1097     # In case of a redirect, save an extra redirect on the request.get call,
   1098     # and ensure we download the exact atomic version even if it changed
-> 1099     # between the HEAD and the GET (unlikely, but hey).
   1100     # Useful for lfs blobs that are stored on a CDN.
   1101     if metadata.location != url:

/opt/anaconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py in _raise_for_status(response)
    167
    168         Example:
--> 169     ```py
    170     import requests
    171     from huggingface_hub.utils import hf_raise_for_status, HfHubHTTPError

RepositoryNotFoundError: 401 Client Error: Repository Not Found for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/f15bc7606314c6fa957b4267bee417ee866c0b84/.gitattributes. If the repo is private, make sure you are authenticated. (Request ID: 8mZ8qL_BSEtX-ODJTa8lW)
````
System Info
- diffusers version: 0.6.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyTorch version (GPU?): 1.13.0.dev20220904 (False)
- Huggingface_hub version: 0.10.1
- Transformers version: 4.23.1
- Using GPU in script?: M1 / not relevant
- Using distributed or parallel set-up in script?: in script / not relevant
Update: the local loading issue was due to a typo in the path. The remote issue may be Hugging Face-wide.
It looks like you can probably use the `use_auth_token` keyword argument, e.g.:

```python
StableDiffusionPipeline.from_pretrained(model_loc, use_auth_token="YOUR_TOKEN")
```

If this works, it probably means that huggingface_hub is having trouble picking up the login token it should have cached locally after you logged in. If it doesn't work, there may be something more serious at play.
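If the explicit token works while the implicit path fails, one thing worth checking is whether the token file reported by the login step actually exists and is non-empty. A minimal stdlib sketch, assuming the `~/.huggingface/token` path printed by the login output above (newer huggingface_hub versions may cache the token elsewhere):

```python
from pathlib import Path

def token_cached(token_path: Path) -> bool:
    """True if a non-empty token file exists at token_path."""
    return token_path.is_file() and token_path.read_text().strip() != ""

# Path reported by the `huggingface-cli login` output above; adjust if your
# cache lives elsewhere.
print(token_cached(Path.home() / ".huggingface" / "token"))
```

If this prints `False` even after a successful login, the library has nothing cached to pick up, which would explain the 401.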
It should work by just doing:

```python
from diffusers import StableDiffusionPipeline

model_loc = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_loc)
```

provided you're logged in and using huggingface_hub >= 0.10.1, which @DSLituiev seems to be doing here.
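As a quick sanity check on that version requirement, here is a rough sketch of a numeric comparison of dotted version strings. This is a hypothetical helper, not part of huggingface_hub, and it deliberately ignores pre-release suffixes like `.dev0`, for which `packaging.version` would be more robust:

```python
def at_least(version: str, minimum: str) -> bool:
    """Naively compare dotted version strings numerically.

    Assumes purely numeric components; pre-release tags like '0.11.0.dev0'
    would raise ValueError, so prefer packaging.version for real use.
    """
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(version) >= parse(minimum)

# e.g. with the version from the System Info above:
print(at_least("0.10.1", "0.10.1"))  # True

# In a live session you would check the installed library instead:
# import huggingface_hub
# at_least(huggingface_hub.__version__, "0.10.1")
```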
To better locate the error, could we try the following steps and see at which step the error occurs:

1. Pass the token explicitly:

```python
from diffusers import StableDiffusionPipeline

model_loc = "CompVis/stable-diffusion-v1-4"
# <your-token> can be found under https://huggingface.co/settings/tokens
pipe = StableDiffusionPipeline.from_pretrained(model_loc, use_auth_token="<your-token>")
```

2. Log in on the CLI, then pass `use_auth_token=True`:

```
huggingface-cli login  # paste <your-token> when prompted
```

```python
from diffusers import StableDiffusionPipeline

model_loc = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_loc, use_auth_token=True)
```

3. Log in on the CLI, then load without any token argument:

```
huggingface-cli login  # paste <your-token> when prompted
```

```python
from diffusers import StableDiffusionPipeline

model_loc = "CompVis/stable-diffusion-v1-4"
pipe = StableDiffusionPipeline.from_pretrained(model_loc)
```
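To record which of the three attempts fails, the steps above can be wrapped in a small helper. `first_failure` below is a hypothetical utility (not part of diffusers or huggingface_hub) that runs labelled zero-argument callables in order and reports the first exception:

```python
def first_failure(attempts):
    """Run (label, callable) pairs in order; return (label, exception) for
    the first one that raises, or None if all succeed."""
    for label, fn in attempts:
        try:
            fn()
        except Exception as exc:
            return label, exc
    return None

# Sketch of the three steps above (assumes diffusers is installed and you
# have logged in; uncomment to run):
# from diffusers import StableDiffusionPipeline
# model_loc = "CompVis/stable-diffusion-v1-4"
# result = first_failure([
#     ("explicit token", lambda: StableDiffusionPipeline.from_pretrained(
#         model_loc, use_auth_token="<your-token>")),
#     ("use_auth_token=True", lambda: StableDiffusionPipeline.from_pretrained(
#         model_loc, use_auth_token=True)),
#     ("implicit login", lambda: StableDiffusionPipeline.from_pretrained(model_loc)),
# ])
# print(result)
```

Whichever label comes back first narrows down whether the problem is the token itself or the locally cached login state.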
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.