
cached models do not load when HF_HUB_OFFLINE=1

Open keturn opened this issue 3 years ago • 3 comments

Describe the bug

The model hub seems to be having a bad time at the moment. That's a bummer, but I expect my application to still work fine in offline mode. It didn't automatically fall back to offline mode, though, and when I manually set the HF_HUB_OFFLINE environment variable and tried again, it still didn't work.

Reproduction

Crash the Hugging Face servers, then set the environment variable HF_HUB_OFFLINE=1 and run StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", revision="fp16").

(plz don't crash the huggingface servers again. hopefully you can find another way to test.)
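As a gentler alternative to crashing the servers, one way to simulate a Hub outage locally is to route HTTPS traffic through a closed local port before importing anything, so every request to huggingface.co fails fast. This is a sketch, not an officially documented technique; the proxy address here is an assumption (port 9, the "discard" port, is almost never open):

```python
import os

# Hypothetical way to simulate a Hub outage without touching the real servers:
# point HTTPS at a closed local port so network requests fail immediately.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:9"
os.environ["HF_HUB_OFFLINE"] = "1"

# Set these env vars BEFORE importing diffusers, then reproduce as usual, e.g.:
# from diffusers import StableDiffusionPipeline
# StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", revision="fp16")
```

Setting the variables before the import matters, since some libraries read the environment at import time.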

Logs

Traceback (most recent call last):
  File "InvokeAI/ldm/invoke/model_cache.py", line 88, in get_model
    requested_model, width, height, hash = self._load_model(model_name)
  File "InvokeAI/ldm/invoke/model_cache.py", line 224, in _load_model
    model, width, height, model_hash = self._load_diffusers_model(mconfig)
  File "InvokeAI/ldm/invoke/model_cache.py", line 364, in _load_diffusers_model
    pipeline = StableDiffusionGeneratorPipeline.from_pretrained(
  File "lib/python3.10/site-packages/diffusers/pipeline_utils.py", line 486, in from_pretrained
    info = model_info(
  File "lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn
    return fn(*args, **kwargs)
  File "lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 1234, in model_info
    hf_raise_for_status(r)
  File "lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 280, in hf_raise_for_status
    raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/models/runwayml/stable-diffusion-v1-5/revision/fp16

System Info

  • diffusers version: 0.10.2
  • Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35
  • Python version: 3.10.6
  • PyTorch version (GPU?): 1.13.0+cu117 (True)
  • Huggingface_hub version: 0.11.1
  • Transformers version: 4.25.1
  • Using GPU in script?: yes, nvidia RTX 3060
  • Using distributed or parallel set-up in script?: no

keturn avatar Dec 15 '22 17:12 keturn

Hey @keturn,

you're right, we did indeed have some trouble with the HF servers over the last few days: https://status.huggingface.co/

In short, we should make sure that whenever files have been downloaded and cached, they can then be loaded automatically when HF_HUB_OFFLINE=1. Will check!
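The intended behavior can be sketched as: read HF_HUB_OFFLINE from the environment and force local-only loading when it is set. This is only an illustration; the exact truthy values huggingface_hub accepts are an assumption here, and `hub_offline` is a hypothetical helper name:

```python
import os

def hub_offline() -> bool:
    # Sketch of the HF_HUB_OFFLINE check (assumption: any of "1", "true",
    # "yes", "on" counts as enabled; huggingface_hub's exact parsing may differ).
    return os.environ.get("HF_HUB_OFFLINE", "").strip().lower() in {"1", "true", "yes", "on"}

# Illustration only (requires diffusers):
# pipe = StableDiffusionPipeline.from_pretrained(model_id, local_files_only=hub_offline())
```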

patrickvonplaten avatar Dec 19 '22 16:12 patrickvonplaten

Ok, yes, I can reproduce this! Essentially, our HF_HUB_OFFLINE=1 variable isn't respected. For now you can work around it by passing local_files_only=True to the from_pretrained(...) method, but we should indeed fix this.

So in short, if you have downloaded & cached a model, the following currently fails when you don't have internet access. First, in the shell:

export HF_HUB_OFFLINE=1

Then, in Python:

import torch
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")

This should be fixed. In the meantime, the following workaround does the trick:

import torch
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, local_files_only=True)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
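For applications that want to keep working through a Hub outage without any env var at all, another option is to retry from the local cache when the online load fails. A minimal sketch, assuming a diffusers-style loader; `load_with_offline_fallback` is a hypothetical wrapper name, with the loader injected so it can stand in for StableDiffusionPipeline.from_pretrained:

```python
def load_with_offline_fallback(load_fn, model_id, **kwargs):
    # Try the online path first; on any error (e.g. a 504 from the Hub),
    # retry against the local cache only.
    try:
        return load_fn(model_id, **kwargs)
    except Exception:
        return load_fn(model_id, local_files_only=True, **kwargs)

# Usage (illustration; requires diffusers):
# pipe = load_with_offline_fallback(StableDiffusionPipeline.from_pretrained,
#                                   "CompVis/stable-diffusion-v1-4")
```

Note that a bare `except Exception` also swallows genuine errors (bad revision, missing cache), so a real implementation would want to catch only network/server failures.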

patrickvonplaten avatar Dec 19 '22 16:12 patrickvonplaten

https://github.com/huggingface/diffusers/pull/1767 should solve it.

patrickvonplaten avatar Dec 19 '22 16:12 patrickvonplaten

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] avatar Jan 15 '23 15:01 github-actions[bot]