[Bug]: No "Accelerate with OpenVINO" Script
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
There is no "Accelerate with OpenVINO" option.
It seems the Script had an error loading.
*** Error loading script: openvino_accelerate.py
Traceback (most recent call last):
File "A:\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "A:\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "A:\stable-diffusion-webui\scripts\openvino_accelerate.py", line 47, in <module>
from diffusers import (
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import HfFolder, cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (A:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\__init__.py)
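As far as I can tell, the import fails because newer huggingface_hub releases no longer provide cached_download, which this diffusers version imports at load time. A quick check from inside the webui venv (just a sketch):

```python
# Sketch: confirm which huggingface_hub the venv resolves and whether it
# still exposes cached_download (dropped in newer releases).
import huggingface_hub

print(huggingface_hub.__version__)
print(hasattr(huggingface_hub, "cached_download"))  # False reproduces the ImportError above
```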
Notes: https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon
PyTorch doesn't officially support torch.compile on Windows yet. Launching torch-install.bat installs PyTorch and enables torch.compile for the OpenVINO backend.
- There is no torch-install.bat
I want to run with support for my Intel Arc A770.
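Side note on the torch.compile point above: a quick way to see from Python whether torch.compile is available in the venv and whether an OpenVINO backend is registered (a sketch; whether "openvino" shows up depends on the installed openvino package):

```python
# Sketch: check torch.compile availability and which dynamo backends are registered.
import torch
import torch._dynamo as dynamo

print(torch.__version__, hasattr(torch, "compile"))
# "openvino" appears here only if the OpenVINO torch.compile backend registered itself.
print(dynamo.list_backends())
```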
Steps to reproduce the problem
Windows 11 Professional N
1. Install Git
2. Install Python 3.10.6
3. Run Command Prompt as Administrator
    1. git clone https://github.com/openvinotoolkit/stable-diffusion-webui.git
    2. cd stable-diffusion-webui
    3. webui-user.bat
4. See that there is no "Accelerate with OpenVINO" option in Scripts
5. See console output showing an error loading openvino_accelerate.py
What should have happened?
There should have been no error loading the script and I should have been able to select it from the dropdown.
Sysinfo
What browsers do you use to access the UI?
Microsoft Edge
Console logs
*** Error loading script: openvino_accelerate.py
Traceback (most recent call last):
File "A:\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "A:\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "A:\stable-diffusion-webui\scripts\openvino_accelerate.py", line 47, in <module>
from diffusers import (
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\__init__.py", line 5, in <module>
from .utils import (
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
from .dynamic_modules_utils import get_class_from_dynamic_module
File "A:\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
from huggingface_hub import HfFolder, cached_download, hf_hub_download, model_info
ImportError: cannot import name 'cached_download' from 'huggingface_hub' (A:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\__init__.py)
Full log:
https://pastebin.com/dUgFPhTE
Additional information
I had Python 3.12 installed previously, but I uninstalled it before installing Python 3.10.6.
Sorry, seems to be a duplicate of https://github.com/openvinotoolkit/stable-diffusion-webui/issues/122
I found a workaround in another issue elsewhere.
First, stop the server if you have it running.
- Add huggingface_hub<0.26.0 to requirements_versions.txt
- Activate your venv (on Windows it's venv\Scripts\activate)
- Run pip install -r requirements_versions.txt
- Run webui-user.bat
Now you should have access to the Accelerate with OpenVINO script.
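To double-check that the pin actually took effect before relaunching, something like this inside the activated venv should do (a sketch; the 0.26.0 bound is just the pin from above):

```python
# Sketch: verify the reinstalled huggingface_hub satisfies the pin and that
# the symbol diffusers needs imports again.
from importlib.metadata import version

print(version("huggingface_hub"))             # expected to be < 0.26.0 after the reinstall
from huggingface_hub import cached_download   # should no longer raise ImportError
```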
I don't know how long it is expected to take with GPU acceleration:
Loading weights [6ce0161689] from A:\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
OpenVINO Script: created model from config : A:\stable-diffusion-webui\configs\v1-inference.yaml
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:26<00:00, 1.31s/it]
OpenVINO Script: loading upscaling model: stabilityai/sd-x2-latent-upscaler
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 9.64it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [01:58<00:00, 11.81s/it]
{}
Interesting, this did not fix it for me on Ubuntu 24.10. Oddly, I am also getting an error message about NVIDIA CUDA drivers not being installed, which I am not seeing in your output. Granted, you are on Windows, but it's still kind of interesting. It also keeps complaining that xformers is not installed, even though it is. I think we might need an updated wiki or guide, as I also couldn't get it to work with Python virtual environments; I had to resort to Conda to get it to download Torch. It just wouldn't with the Python venv.
Also, for further context: the webui loads fine and can generate using the CPU; it is just the OpenVINO script that doesn't seem to be working right now. After putting huggingface_hub<0.26.0 in requirements_versions.txt and letting pip install all the packages from that file inside the Conda virtual environment, it still doesn't work. I suspect there are some minor differences between Windows and Linux that are causing this. I will keep trying to get it to work, but I would really appreciate it if the developers could weigh in on this.
I don't know if this is feasible for you, but I gave up on this project since I ran into issues (or didn't know how) getting custom models from Civit.ai to work.
Instead I looked at SD.Next, which supports Intel Arc, with some installation steps found here.
It is easy to use and works with custom models from Civit.ai on my Intel Arc A770. It was also a lot faster at generation than this project. It is still based on AUTOMATIC1111 but has a modified front end.
@SauceChord thanks for mentioning this. I was ready to abandon it myself, as I ran into all kinds of issues before, which I finally overcame yesterday. Good to know that some CivitAI models are buggy; I will have to keep that in mind. Sadly I cannot switch to SD.Next, as I have an Intel Iris Xe GPU. I wish that was supported. Thanks for letting me know though!
@proairface - I also have an Intel Iris Xe GPU and am fighting this. What did you do to overcome the issue yesterday? I never could find the torch-install file from the instructions, and the script doesn't show up for me in the dropdown.
@becky-irisbluetech I'm on Ubuntu 24.10, so I will give you a quick summary of what I did. I am using Conda instead of the Python venv commands, so my steps are a bit different from the wiki.
- Clone this Git repository as per the instructions/wiki
- Open a terminal in this new directory or cd into it
- Create a new isolated Python environment (in this case Python 3.11) with the same name as in the wiki, sd_env: conda create --name sd_env python=3.11
- Start working inside this new virtual environment: conda activate sd_env
- Continue with the commands from the wiki, while still inside the Conda virtual environment in your terminal: export PYTORCH_TRACING_MODE=TORCHFX, then export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half", then ./webui.sh
This should do the trick without any issues. Once you're done in this virtual environment, type conda deactivate to get out of it.
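As a quick sanity check before launching, you can confirm inside the activated sd_env that the exported variable is visible and that torch and openvino import cleanly (a sketch, assuming both got installed into the environment):

```python
# Sketch: verify the wiki's environment variable is set and that the key
# packages import inside sd_env before running ./webui.sh.
import os
import torch
from openvino.runtime import get_version

print(os.environ.get("PYTORCH_TRACING_MODE"))  # expected: TORCHFX
print(torch.__version__, get_version())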
@proairface Could you confirm that you are using the iGPU with this openvino_accelerate.py script? I'm asking because only CPU is shown for me.
Here is my inxi -G output
Graphics:
Device-1: Intel Raptor Lake-P [Iris Xe Graphics] driver: i915 v: kernel
Device-2: Realtek Integrated_Webcam_FHD driver: uvcvideo type: USB
Display: x11 server: X.Org v: 21.1.14 with: Xwayland v: 24.1.4 driver: X:
loaded: modesetting dri: iris gpu: i915 resolution: 1: 1920x1080~60Hz
2: 1920x1080~60Hz
API: EGL v: 1.5 drivers: iris,swrast platforms: gbm,x11,surfaceless,device
API: OpenGL v: 4.6 compat-v: 4.5 vendor: intel mesa v: 24.2.8-arch1.1
renderer: Mesa Intel Graphics (RPL-U)
API: Vulkan v: 1.4.303 drivers: N/A surfaces: xcb,xlib
Info: Tools: api: clinfo, eglinfo, glxinfo, vulkaninfo
de: kscreen-console,kscreen-doctor gpu: gputop, intel_gpu_top, lsgpu
wl: wayland-info x11: xdpyinfo, xprop, xrandr
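Independent of inxi, you can also ask the OpenVINO runtime directly which devices it sees (a sketch, using the openvino package from the webui environment):

```python
# Sketch: list the devices OpenVINO can use; the Iris Xe iGPU only shows up
# as "GPU" when the Intel compute runtime/driver is installed.
from openvino.runtime import Core

print(Core().available_devices)  # e.g. ['CPU'] or ['CPU', 'GPU']
```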
@jillesmc This is my output from inxi -G on my Ubuntu 24.10 laptop:
Graphics:
Device-1: Intel Meteor Lake-P [Intel Arc Graphics] driver: i915 v: kernel
Device-2: Luxvisions Innotech Integrated Camera driver: uvcvideo type: USB
Display: wayland server: X.Org v: 24.1.2 with: Xwayland v: 24.1.2
compositor: gnome-shell v: 47.0 driver: dri: iris gpu: i915 resolution: 3840x2400~60Hz
API: EGL v: 1.5 drivers: iris,swrast platforms: gbm,wayland,x11,surfaceless,device
API: OpenGL v: 4.6 compat-v: 4.5 vendor: intel mesa v: 24.2.3-1ubuntu1
renderer: Mesa Intel Arc Graphics (MTL)
I don't know where that screenshot you shared is from. Is that somewhere under the Settings menu in the Stable Diffusion webui?
This is what worked for me in Windows 11:
- Add huggingface_hub==0.20.2 to requirements_versions.txt
- Activate your venv (on Windows it's venv\Scripts\activate)
- Run pip install -r requirements_versions.txt
- Run webui-user.bat
When the script is enabled, it's much slower.
| | CPU | Script + CPU | Script + GPU |
|---|---|---|---|
| Speed | 5.02 s/it | 8.58 s/it | 20 s/it |
| One Pic Time | 1min 40s | 2min 48s | 6min 40s |
| RAM | 26 GB | 24 GB | 22 GB |
| GPU RAM | 0.5 GB | 0.5 GB | 4.2 GB |