Error in StableCascadeDecoderPipeline.from_pretrained
Describe the bug
When I used "StableCascadeDecoderPipeline.from_pretrained", the error accured "AttributeError: module diffusers.pipelines.stable_cascade has no attribute StableCascadeUNet". Can someone help me? @DN6 @
Reproduction
from diffusers import StableCascadeDecoderPipeline
StableCascadeDecoderPipeline.from_pretrained("my file path")
Logs
No response
System Info
pip install git+https://github.com/kashif/diffusers.git@wuerstchen-v3
Who can help?
No response
@kashif
I also faced a similar error. Here is the code I used to solve it.
Replace pip install git+https://github.com/kashif/diffusers.git@wuerstchen-v3 with:
pip install git+https://github.com/kashif/diffusers.git@a3dc21385b7386beb3dab3a9845962ede6765887
and add the following after import torch:
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)
Replace pip install git+https://github.com/kashif/diffusers.git@wuerstchen-v3 with: pip install git+https://github.com/kashif/diffusers.git@a3dc21385b7386beb3dab3a9845962ede6765887 --force-reinstall
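For reference, a minimal self-contained sketch of the workaround above, with the SDP backend toggles placed right after importing torch and before loading the pipeline (the model path is the same placeholder as in the reproduction):

import torch
# disable the memory-efficient and flash SDP attention kernels, as suggested above
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)
from diffusers import StableCascadeDecoderPipeline
# replace the placeholder path with your local checkpoint or Hub repo
decoder = StableCascadeDecoderPipeline.from_pretrained("my file path")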
If you are using the main branch, this script works fine.
import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    torch_dtype=torch.bfloat16,
    variant="bf16",
    revision="refs/pr/2"
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    torch_dtype=torch.bfloat16,
    variant="bf16",
    revision="refs/pr/44"
).to("cuda")
num_images_per_prompt = 1
prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=num_images_per_prompt,
    num_inference_steps=20
)
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings,
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("result.png")
Also getting this error trying to use StableCascadeCombinedPipeline on the main branch, so it seems like something needs fixing. I also noticed that the Combined example loads the model "stabilityai/stable-cascade-combined", which doesn't exist on Hugging Face, so that might need fixing too. I'm switching to running the prior and decoder separately instead of the combined pipeline, but would prefer the other way.
In your example you're loading the bfloat16 version, but does this currently work with float16 and variant="fp16"? The simplified example shows the prior as torch.bfloat16 and the decoder as torch.float16, but I'm not sure of the best supported way for the least VRAM usage.
When I run the prior_output code the way @dai-ichiro shows, I'm also getting this error:
File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py", line 1266, in __call__
hidden_states = F.scaled_dot_product_attention(
RuntimeError: cutlassF: no kernel found to launch!
My Environment is
Windows 11
CUDA 11.8 (RTX 4090)
Python 3.11
Torch 2.2.0
pip install torch==2.2.0+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install git+https://github.com/huggingface/diffusers
pip install accelerate transformers peft
This script also works fine.
import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior",
    torch_dtype=torch.bfloat16,
    variant="bf16",
    revision="refs/pr/2"
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    torch_dtype=torch.float16,
    revision="refs/pr/44"
).to("cuda")
num_images_per_prompt = 1
prompt = "an image of a shiba inu, donning a spacesuit and helmet"
negative_prompt = ""
prior_output = prior(
    prompt=prompt,
    height=1024,
    width=1024,
    negative_prompt=negative_prompt,
    guidance_scale=4.0,
    num_images_per_prompt=num_images_per_prompt,
    num_inference_steps=20
)
decoder_output = decoder(
    image_embeddings=prior_output.image_embeddings.half(),
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=0.0,
    output_type="pil",
    num_inference_steps=10
).images[0]
decoder_output.save("result.png")
Hi all,
if I get torch.cuda.OutOfMemoryError: CUDA out of memory,
is there any trick to be able to produce an image? Thanks.
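For reference, a small sketch of the memory-saving hooks diffusers provides, applied to the scripts above (a sketch under the assumption that accelerate is installed; it won't guarantee Stable Cascade fits in any given amount of VRAM):

import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
)
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
)
# instead of .to("cuda"), offload submodules to the CPU and move them to the GPU
# only while they are needed; enable_sequential_cpu_offload() trades more speed
# for even lower VRAM usage
prior.enable_model_cpu_offload()
decoder.enable_model_cpu_offload()

Reducing height/width or num_images_per_prompt also lowers peak memory.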
After messing with it, I was able to fix the error cutlassF: no kernel found to launch! by upgrading the version of Torch. I'm running on Colab Pro with preinstalled torch 2.1.0, so I manually updated to the latest torch 2.2.1 (+4 min) and finally got Stable Cascade to work. You might want to mention the torch requirement in the docs.
It would be nice to figure out better optimizations for it, but at least with the bf16 model it only takes up ~14 GB of VRAM, which is tolerable; I still hit OOM a few times while playing with settings. Thanks for the suggestion of loading the decoder as float16 with image_embeddings.half(); I made that my switch when higher-VRAM mode is enabled.
The other error I ran into was using callback_on_step_end for my progress bar, which gives me this:
File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py", line 597, in __call__
latents = callback_outputs.pop("latents", latents)
AttributeError: 'NoneType' object has no attribute 'pop'
I was able to fix this by adding if callback_outputs is not None: before that line in my fork, which I had to do with PIA too. That should probably be patched; simple enough.
Still need to work with it more and make adjustments, but I got it stable enough in DiffusionDeluxe.com app, feels like a milestone... Thanks.
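A minimal sketch of the guard described above, around the line from the traceback (the surrounding callback invocation is paraphrased; the exact code in pipeline_stable_cascade_prior.py may differ by version):

callback_outputs = callback_on_step_end(self, i, t, callback_kwargs)
# guard: a user callback may return None, in which case there is nothing to pop
if callback_outputs is not None:
    latents = callback_outputs.pop("latents", latents)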
After installing the latest diffusers update, I hit the error again and the official code does not work.
It seems there is a bug: the module name is StableCascadeUnet when using from_pretrained to load the model, but the class name in the package is StableCascadeUNet.
from diffusers.models.unets.unet_stable_cascade import StableCascadeUNet
@Skquark @dai-ichiro
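If upgrading diffusers isn't an option, here is a hedged sketch of a possible stopgap, assuming the failure comes from from_pretrained resolving the class name via getattr on the diffusers package (match the attribute name to the one in your AttributeError; this is a guess, not a confirmed fix):

import diffusers
from diffusers.models.unets.unet_stable_cascade import StableCascadeUNet

# expose the UNet class under the name the pipeline config looks up; adjust
# "StableCascadeUNet" vs "StableCascadeUnet" to whatever the error message names
if not hasattr(diffusers, "StableCascadeUNet"):
    diffusers.StableCascadeUNet = StableCascadeUNet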
I have no idea how to run the files supplied. Do I just open CMD in the Stable Diffusion WebUI folder and paste the code?
None of these solutions is working for me. One of them (not sure which) just broke everything, and I had to re-install everything from scratch.
I have the same error.
Diffusers has been updated. Now you don't need any of these solutions; just use the official code instead.
can we close this issue now?
I don't know if my issue is actually directly related here, or if the extension is the issue.
Traceback (most recent call last):
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\AI Stuff\A1111\stable-diffusion-webui\extensions\sdweb-easy-stablecascade-diffusers\scripts\easy_stablecascade_diffusers.py", line 41, in predict
prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16).to(device)
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1263, in from_pretrained
loaded_sub_model = load_sub_model(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 443, in load_sub_model
class_obj, class_candidates = get_class_obj_and_candidates(
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 347, in get_class_obj_and_candidates
class_obj = getattr(library, class_name)
File "C:\AI Stuff\A1111\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\import_utils.py", line 697, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute StableCascadeUNet
I was using this earlier branch pip install git+https://github.com/kashif/diffusers.git@a3dc21385b7386beb3dab3a9845962ede6765887 which worked for a while, but now the model files on huggingface have been updated and I get AttributeError: module diffusers has no attribute StableCascadeUNet with this branch.
To fix this, I had to upgrade to the new diffusers version, as hashnimo mentioned. Note that just pip install might be insufficient. I had to uninstall diffusers first.
pip uninstall diffusers
pip install diffusers
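After reinstalling, a quick way to confirm which version actually ended up in the environment (a standard check, not from the thread):

import diffusers
# should print the freshly installed version
print(diffusers.__version__)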
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Closing as this seems resolved. Feel free to reopen if the issue persists.
the same problem:
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\Soft\stable-diffusion-webui-forge\extensions\sdweb-easy-stablecascade-diffusers\scripts\easy_stablecascade_diffusers.py", line 41, in predict
prior = StableCascadePriorPipeline.from_pretrained("stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16).to(device)
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 119, in _inner_fn
return fn(*args, **kwargs)
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1263, in from_pretrained
loaded_sub_model = load_sub_model(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 443, in load_sub_model
class_obj, class_candidates = get_class_obj_and_candidates(
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 347, in get_class_obj_and_candidates
class_obj = getattr(library, class_name)
File "D:\Soft\stable-diffusion-webui-forge\venv\lib\site-packages\diffusers\utils\import_utils.py", line 697, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute StableCascadeUNet
any suggestion?
Install this one instead. It doesn't have all these issues. Cascade 1-click installer
@juan-lalo What version of diffusers are you using? Please run diffusers-cli env and share the output.
I was using Stable Diffusion with the Forge UI, which I got from this URL: https://github.com/lllyasviel/stable-diffusion-webui-forge
and I installed Stable Cascade from the Extensions tab using the "Install from URL" option with this link: https://github.com/benjamin-bertram/sdweb-easy-stablecascade-diffusers
I am using the latest version of Diffusers, which is 0.72.2 as of today.
I am trying to debug the project, and the error appears on the line shown in the next image:
@Creepybits I tried the "Cascade 1-click installer" and it showed me the following error:
File "D:\stable-cascade-one-click-installer\venv\lib\site-packages\torch\cuda\amp\autocast_mode.py", line 34, in __init__
super().__init__(
File "D:\stable-cascade-one-click-installer\venv\lib\site-packages\torch\amp\autocast_mode.py", line 306, in __init__
raise RuntimeError(
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.
What graphics card are you using? You could try updating CUDA and see if that helps. I have 12.1 and have no issues. https://developer.nvidia.com/cuda-12-1-1-download-archive
My graphics card is an NVIDIA RTX 2060 (laptop); GPU-Z shows CUDA as activated.
I updated NVIDIA CUDA to the latest 12.4 version today, and it shows the same problem.
For test purposes only, I commented out the lines that validate the "cuda" device, and the application runs, but very slowly (generating one image takes 10 minutes).
The Forge version is optimized for low requirements.
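Since the RTX 2060 (Turing) does not support bfloat16 and the RuntimeError above suggests switching to float16, a sketch of loading both stages in float16 may be worth trying (adapted from the scripts earlier in the thread; whether the prior is fully stable numerically in float16 isn't confirmed here):

import torch
from diffusers import StableCascadeDecoderPipeline, StableCascadePriorPipeline

# float16 instead of bfloat16 for GPUs without bfloat16 support
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.float16
).to("cuda")
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.float16
).to("cuda")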
I've tried all the steps and Stable Cascade doesn't work in webui-forge.
"Error AttributeError: module diffusers has no attribute StableCascadeUNet"
My webui-forge is up to date: version: [f0.0.17v1.8.0rc-latest-276-g29be1da7]  •  python: 3.10.6  •  torch: 2.1.2+cu121  •  xformers: N/A  •  gradio: 3.41.2
- deleted venv folder
- installed https://developer.nvidia.com/cuda-12-1-1-download-archive
- uninstalled and installed diffusers
It is working in A1111; can anyone suggest how to make it work in Forge?
Full error attached. err.txt
You can install and run Cascade separately: https://github.com/EtienneDosSantos/stable-cascade-one-click-installer
