websepia

Results 7 comments of websepia

Most likely, the same issue here:

```
python gradio_app/app.py
RUNNING ON: cuda
Loading pipeline components...: 0%| | 0/6 [00:00
```

> refer other issue fix this #58 update diffusers to another commit id `pip install git+https://github.com/kashif/diffusers.git@a3dc21385b7386beb3dab3a9845962ede6765887 --force`
>
> have a try.

Works for me. Thanks!

> after looking through some other bugs, found this. It fixed it for me. The documentation on hugging face for the version to install must be newer, but broken.
> ...

I reverted back to 84b801d8, but this error still exists! Steps:

1. `git reflog`
   ```
   ff0e79fa (HEAD -> main, origin/main, origin/HEAD) HEAD@{21}: pull: Fast-forward
   84b801d8 HEAD@{22}: clone: from https://github.com/invoke-ai/InvokeAI
   ```
2. git...

`invokeai.init` has a parameter `--always_use_cpu`, but it does not seem to work in my case.

```
# InvokeAI initialization file
# This is the InvokeAI initialization file, which contains command-line default values.
#...
```
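For context, an init file that holds command-line defaults can be modeled with argparse's `fromfile_prefix_chars`. This is only a minimal sketch of that general mechanism, not InvokeAI's actual loader; the file name and flag wiring here are stand-ins:

```python
import argparse
import os
import tempfile

# Minimal sketch (not InvokeAI's code): a file of command-line defaults,
# one argument per line, fed to argparse via the @file syntax.
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
parser.add_argument('--always_use_cpu', action='store_true')

# Write a tiny stand-in for an init file containing one flag per line.
with tempfile.NamedTemporaryFile('w', suffix='.init', delete=False) as f:
    f.write('--always_use_cpu\n')
    init_path = f.name

args = parser.parse_args(['@' + init_path])
print(args.always_use_cpu)  # the flag from the file is picked up
os.unlink(init_path)
```

If the flag parses correctly but has no effect, the problem is downstream of parsing — which is what the workarounds below poke at.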

As a workaround, to make it work, I changed the `map_location` default to `'cpu'` in /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/torch/serialization.py like below:

```
def load(
    f: FILE_LIKE,
    map_location: MAP_LOCATION = 'cpu',
    pickle_module: Any = None,
    *,
    ...
```

Another workaround is to add the following snippet at line 786 of model_manager: /Users/xixili/AI/InvokeAI/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py

```
if Globals.always_use_cpu is True:
    checkpoint = torch.load(model_path, map_location=torch.device('cpu'))
else:
    checkpoint = torch.load(model_path)
```