COLAB - CLIP_Guided_Stable_diffusion_with_diffusers.ipynb BUG
Describe the bug
From the Colab version of:
https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion
| Example | Description | Code Example | Colab | Author |
|---|---|---|---|---|
| CLIP Guided Stable Diffusion | Doing CLIP guidance for text-to-image generation with Stable Diffusion | CLIP Guided Stable Diffusion | Open In Colab | Suraj Patil |
CODE
```python
#@title Load the pipeline
import torch
from PIL import Image
from diffusers import LMSDiscreteScheduler, DiffusionPipeline, PNDMScheduler
from transformers import CLIPFeatureExtractor, CLIPModel

model_id = "CompVis/stable-diffusion-v1-4" #@param {type: "string"}
clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" #@param ["laion/CLIP-ViT-B-32-laion2B-s34B-b79K", "laion/CLIP-ViT-L-14-laion2B-s32B-b82K", "laion/CLIP-ViT-H-14-laion2B-s32B-b79K", "laion/CLIP-ViT-g-14-laion2B-s12B-b42K", "openai/clip-vit-base-patch32", "openai/clip-vit-base-patch16", "openai/clip-vit-large-patch14"] {allow-input: true}
scheduler = "plms" #@param ['plms', 'lms']

def image_grid(imgs, rows, cols):
    assert len(imgs) == rows * cols
    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

if scheduler == "lms":
    scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
else:
    scheduler = PNDMScheduler.from_config(model_id, subfolder="scheduler")

feature_extractor = CLIPFeatureExtractor.from_pretrained(clip_model_id)
clip_model = CLIPModel.from_pretrained(clip_model_id, torch_dtype=torch.float16)

guided_pipeline = DiffusionPipeline.from_pretrained(
    model_id,
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    scheduler=scheduler,
    revision="fp16",
    torch_dtype=torch.float16,
)
guided_pipeline = guided_pipeline.to("cuda")
```
ERROR
```
---------------------------------------------------------------------------
HTTPError                                 Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py in hf_raise_for_status(response, endpoint_name)
    212     try:
--> 213         response.raise_for_status()
    214     except HTTPError as e:

7 frames

/usr/local/lib/python3.7/dist-packages/requests/models.py in raise_for_status(self)
    940     if http_error_msg:
--> 941         raise HTTPError(http_error_msg, response=self)
    942

HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/CompVis/stable-diffusion-v1-4/resolve/main/scheduler/scheduler_config.json

The above exception was the direct cause of the following exception:

HfHubHTTPError                            Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    233             subfolder=subfolder,
--> 234             revision=revision,
    235         )

/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, use_auth_token, local_files_only, legacy_cache_layout)
   1056         proxies=proxies,
-> 1057         timeout=etag_timeout,
   1058     )

/usr/local/lib/python3.7/dist-packages/huggingface_hub/file_download.py in get_hf_file_metadata(url, use_auth_token, proxies, timeout)
   1358     )
-> 1359     hf_raise_for_status(r)
   1360

/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_errors.py in hf_raise_for_status(response, endpoint_name)
    253     # as well (request id and/or server error message)
--> 254     raise HfHubHTTPError(str(HTTPError), response=response) from e
    255

HfHubHTTPError: <class 'requests.exceptions.HTTPError'> (Request ID: jZS5obZMrJhLaKgaR-re8)

During handling of the above exception, another exception occurred:

OSError                                   Traceback (most recent call last)
<ipython-input-7-da13ad99b2d8> in <module>
     25     scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
     26 else:
---> 27     scheduler = PNDMScheduler.from_config(model_id, subfolder="scheduler")
     28
     29

/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in from_config(cls, pretrained_model_name_or_path, return_unused_kwargs, **kwargs)
    159
    160     """
--> 161     config_dict = cls.get_config_dict(pretrained_model_name_or_path=pretrained_model_name_or_path, **kwargs)
    162     init_dict, unused_kwargs = cls.extract_init_dict(config_dict, **kwargs)
    163

/usr/local/lib/python3.7/dist-packages/diffusers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    254     except HTTPError as err:
    255         raise EnvironmentError(
--> 256             "There was a specific connection error when trying to load"
    257             f" {pretrained_model_name_or_path}:\n{err}"
    258         )

OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
<class 'requests.exceptions.HTTPError'> (Request ID: jZS5obZMrJhLaKgaR-re8)
```
Reproduction
No response
Logs
No response
System Info
```python
#@title Install dependencies
!pip install -qqq diffusers==0.4.1 transformers ftfy gradio
```
It's a 403. Make sure to accept the TOS on the Hugging Face page for Stable Diffusion v1.4.
Devs, I would recommend adding a custom error message whenever the TOS hasn't been accepted.
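A minimal sketch of what such a message could look like, wrapping the failing scheduler load. `load_scheduler_or_explain` and `_has_403` are hypothetical helpers invented for illustration, and detecting the 403 by walking the exception chain is an assumption about the raised errors, not actual diffusers behavior:

```python
# Hypothetical sketch of the suggested friendlier error; not a diffusers API.
from diffusers import PNDMScheduler


def _has_403(exc):
    # Walk the exception chain looking for an HTTP response with status 403
    # (requests.HTTPError carries the response on a `.response` attribute).
    while exc is not None:
        response = getattr(exc, "response", None)
        if response is not None and getattr(response, "status_code", None) == 403:
            return True
        exc = exc.__cause__ or exc.__context__
    return False


def load_scheduler_or_explain(model_id):
    try:
        return PNDMScheduler.from_config(model_id, subfolder="scheduler")
    except Exception as err:
        if _has_403(err):
            raise EnvironmentError(
                f"Got a 403 for '{model_id}'. The model is gated: accept the "
                f"license at https://huggingface.co/{model_id}, then log in "
                "with `huggingface-cli login` and retry."
            ) from err
        raise
```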
@patil-suraj could you take a look here? :-)
@patil-suraj Awesome work, I love everything with Colab!
Hey @stromal, as said by @dblunk88 it looks like an auth issue. If you are using the Colab, make sure to run the Login cell first, before loading the pipeline. If you are running locally, you can also do

```bash
huggingface-cli login
```
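In a notebook, the equivalent (assuming a reasonably recent huggingface_hub) is:

```python
# Run this in a Colab/Jupyter cell before loading the pipeline; it prompts
# for an access token from https://huggingface.co/settings/tokens.
from huggingface_hub import notebook_login

notebook_login()
```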
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.