[bug]: Installation woes
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
No response
What happened?
I've spent the last two days on this, with over 12 hours of trying... I am new to Anaconda and a novice user, so please keep this in mind. I have tried the installation 6 different times and have been troubleshooting in between. These are the errors I receive every time during the installation of 2.3.0:
- Python: the command prompt lists my version of Python as 3.10.9, and Anaconda Navigator shows the base (root) Python as 3.9.16, but I still receive this error: "Python 3.10.9 (you have 3.10.9)"
- PyTorch: I have tried every install method I can think of: through the command prompt, through the Anaconda base environment; I even tried making a new InvokeAI environment and installing PyTorch prior to running installation.bat. I've tried v1.13.1 and the new v2.0. Nothing works, as I get this error message every time: "1.13.1+cu117 with CUDA 1107 (you have 1.13.1+cpu)"
- xformers: I installed them in the Anaconda base environment. Why can't I use them in InvokeAI?
- models: I have tried (r)ecommended models, (a)ll models, and a (c)ustomized list, and they all fail to download, which crashes the installer. The linked addresses seem to be dead links. If the installer would just create the necessary directory, I could copy .ckpt files over myself, because I already have them all for Automatic1111. I tried (s)kip this step, and at least that lets me continue with GFPGAN etc.
- pip: I have updated pip to v23.1 (verified in the command prompt and the Anaconda base environment), but I still get the message "A new release of pip available: 22.3.1 -> 23.0" every time
- when the installation is complete, the window closes automatically, so I don't get the opportunity to look over the process. Please do not have it auto-close the window.
Full error message:
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.13.1+cpu)
Python 3.10.9 (you have 3.10.9)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
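The key detail in that warning is the local build tag after the `+` in the PyTorch version: xFormers was built against a CUDA build (`1.13.1+cu117`), but the installed wheel is CPU-only (`1.13.1+cpu`). A small sketch (this helper is my own, not part of InvokeAI) that makes the distinction explicit:

```python
def torch_build_flavor(version: str) -> str:
    """Classify a PyTorch version string such as '1.13.1+cpu' or
    '1.13.1+cu117' by its local build tag.

    Hypothetical helper, not part of InvokeAI; it just spells out
    what the '+cpu' vs '+cu117' suffix in the warning means."""
    tag = version.partition("+")[2]
    if tag.startswith("cu"):
        return f"CUDA {tag[2:]}"  # e.g. 'cu117' -> 'CUDA 117'
    if tag == "cpu":
        return "CPU-only"
    if tag == "":
        return "no local build tag"
    return f"other ({tag})"

# The two builds the warning compares:
print(torch_build_flavor("1.13.1+cu117"))  # what xFormers was built for
print(torch_build_flavor("1.13.1+cpu"))    # what is actually installed
```

If the installed flavor is CPU-only on a CUDA machine, the fix is reinstalling a matching CUDA build of torch into the InvokeAI virtual environment, not into the Anaconda base environment.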
When I try to run invoke.bat -> web UI, it simply crashes and the window closes. Any help getting this going would be greatly appreciated.
My GPU drivers and Anaconda are up to date. Running Windows 10, AMD Ryzen 9 3900X 12-core, NVIDIA GeForce RTX 3080 Ti, 64 GB RAM.
Screenshots
No response
Additional context
No response
Contact Details
No response
These issues are fixed in the latest release candidate. Please download the most recent installer and try again: https://github.com/invoke-ai/InvokeAI/releases/download/v2.3.0-rc6/InvokeAI-installer-v2.3.0-rc6.zip.
To be sure that the reinstall is as complete as possible, please delete the .venv directory (which may be hidden) from within your invokeai runtime directory.
@transitgrave v2.3.0 has been released. Could you please confirm whether it fixes your issue? Thank you!
Same issue when installing the latest v2.3.0. Windows 10, NVIDIA 3090. Also, my installation didn't create a models.yaml file; I had to create and populate it manually, and manually copy the 1.5 model into the models\ldm\stable-diffusion-v1 directory. After all this, I'm presented with a blank screen.
Starting the InvokeAI browser-based UI..
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 1.13.1+cu117 with CUDA 1107 (you have 1.13.1+cpu)
Python 3.10.9 (you have 3.10.9)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
I had previously run v2.2.5 without issue.
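For anyone else who has to hand-write models.yaml, a minimal 2.3.x-era stanza looked roughly like the following. This is a sketch, not an authoritative template: the checkpoint filename and paths are assumptions, so adjust them to match whatever .ckpt you actually copied in.

```yaml
# Sketch of a hand-written models.yaml entry for InvokeAI 2.3.x.
# The weights filename below is an assumption; use the .ckpt you
# actually placed in models/ldm/stable-diffusion-v1/.
stable-diffusion-1.5:
  description: Stable Diffusion 1.5
  weights: models/ldm/stable-diffusion-v1/v1-5-pruned-emaonly.ckpt
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: true
```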
I did notice that the installation here updated my Python site packages at my default Python location. I would have thought the install only messed with the .venv directory, but maybe this is expected.
> I did notice that the installation here updated my Python site packages at my default Python location. I would have thought the install only messed with the .venv directory, but maybe this is expected.
That is definitely not expected.
- Did you have a virtual environment activated while running the installer?
- Did you try to upgrade an existing install, or perform a clean installation?
- Could you please try a clean install in a brand new location, and paste the complete terminal output, starting with the line where you call install.bat?
Thank you
There has been no activity in this issue for 14 days. If you are still experiencing this issue, please reply to confirm that it still occurs with the latest release.
I'm encountering the same issue. Is there any update here?
I was using 2.2.x a couple of months ago without any issue.
I'm using InvokeAI 2.3.3rc7 now.
$ .venv/bin/python -m xformers.info
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.0+cu118 with CUDA 1108 (you have 1.13.1+cu117)
Python 3.10.10 (you have 3.10.6)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
xFormers 0.0.18
memory_efficient_attention.cutlassF: unavailable
memory_efficient_attention.cutlassB: unavailable
memory_efficient_attention.flshattF: unavailable
memory_efficient_attention.flshattB: unavailable
memory_efficient_attention.smallkF: unavailable
memory_efficient_attention.smallkB: unavailable
memory_efficient_attention.tritonflashattF: available
memory_efficient_attention.tritonflashattB: available
indexing.scaled_index_addF: unavailable
indexing.scaled_index_addB: unavailable
indexing.index_select: unavailable
swiglu.dual_gemm_silu: unavailable
swiglu.gemm_fused_operand_sum: unavailable
swiglu.fused.p.cpp: not built
is_triton_available: True
is_functorch_available: False
pytorch.version: 1.13.1+cu117
pytorch.cuda: available
gpu.compute_capability: 5.2
gpu.name: Tesla M40 24GB
build.info: available
build.cuda_version: 1108
build.python_version: 3.10.10
build.torch_version: 2.0.0+cu118
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.18
source.privacy: open source
NVIDIA-SMI 515.86.01 Driver Version: 515.86.01 CUDA Version: 11.7
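Note that this output shows a different mismatch than the earlier Windows reports: here the installed torch is `1.13.1+cu117`, while the xFormers 0.0.18 wheel was built against `2.0.0+cu118` (and Python 3.10.10 vs the installed 3.10.6). A small sketch, assuming the key/value layout of the `xformers.info` output above, that pulls the two torch versions out and flags the disagreement (the parsing helper is mine, not part of xFormers):

```python
def find_torch_mismatch(info_text: str):
    """Extract 'pytorch.version' (the installed torch) and
    'build.torch_version' (what the xFormers wheel was built for)
    from `python -m xformers.info` output, and report whether they
    differ. Parsing sketch based on the layout shown above."""
    fields = {}
    for line in info_text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    installed = fields.get("pytorch.version")
    built_for = fields.get("build.torch_version")
    return installed, built_for, installed != built_for

sample = """\
pytorch.version: 1.13.1+cu117
build.torch_version: 2.0.0+cu118
"""
installed, built_for, mismatch = find_torch_mismatch(sample)
print(installed, built_for, mismatch)  # 1.13.1+cu117 2.0.0+cu118 True
```

When the two versions disagree like this, the usual remedy is to install an xFormers wheel built for the torch version actually in the venv (or vice versa), rather than mixing wheels from different release lines.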