Running on CPU now! Make sure your PyTorch version matches your CUDA
I have a problem with running CodeFormer on the GPU with CUDA. When I run the command:
python inference_codeformer.py --bg_upsampler realesrgan --face_upsample -w 0.7 --input_path G:\AI\CodeFormer\results\test1.jpg
I'm getting:
inference_codeformer.py:49: RuntimeWarning: Running on CPU now! Make sure your PyTorch version matches your CUDA.The unoptimized RealESRGAN is slow on CPU. If you want to disable it, please remove --bg_upsampler and --face_upsample in command.
I uninstalled CUDA 12 and installed 11.7, then reinstalled PyTorch with:
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
but no change.
Can somebody suggest a solution?
Environment:
Windows 11, Desktop, RTX 3070
Python 3.10.9
pytorch 1.13.1 py3.9_cuda11.7_cudnn8_0 pytorch
pytorch-cuda 11.7 h67b0de4_1 pytorch
pytorch-mutex 1.0 cuda pytorch
torchaudio 0.13.1 pypi_0 pypi
torchvision 0.14.1 pypi_0 pypi
I have the same issue.
REMOVE ALL OF THEM (pytorch, torchvision, torchaudio, cudatoolkit)
and run: conda install pytorch=1.11.0 torchvision=0.12 torchaudio=0.11 cudatoolkit=11.5 -c pytorch -c conda-forge
I tested it myself and can confirm it works for me, so I hope it works for you too.
Thank you. I did:
conda remove pytorch torchvision torchaudio cudatoolkit
conda install pytorch=1.11.0 torchvision=0.12 torchaudio=0.11 cudatoolkit=11.5 -c pytorch -c conda-forge
Everything installed properly, but I'm still getting:
inference_codeformer.py:49: RuntimeWarning: Running on CPU now! Make sure your PyTorch version matches your CUDA.The unoptimized RealESRGAN is slow on CPU. If you want to disable it, please remove `--bg_upsampler` and `--face_upsample` in command.
warnings.warn('Running on CPU now! Make sure your PyTorch version matches your CUDA.'
I do not have CUDA 12 installed, I have 11.5, but CodeFormer is still running on CPU.
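One quick thing worth checking at this point (just a minimal sketch, not part of CodeFormer): whether the PyTorch build conda actually installed is a CUDA build at all. If `torch.version.cuda` prints `None`, the environment ended up with a CPU-only build and the warning will keep appearing no matter which CUDA Toolkit is installed on the system.

```python
# Minimal diagnostic sketch: run inside the same conda env used for CodeFormer.
import torch

print(torch.__version__)          # e.g. "1.11.0" or "1.11.0+cpu"
print(torch.version.cuda)         # None means a CPU-only build
print(torch.cuda.is_available())  # must be True for the GPU to be used
```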
Did you install the CUDA Toolkit first? https://developer.nvidia.com/cuda-11.3.0-download-archive
Also, if you're running dual display adapters (for example, some notebooks run an Intel integrated GPU plus an NVIDIA one), you'd better disable the non-NVIDIA one.
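A minimal sketch (assuming torch is installed in the environment) to list the devices PyTorch can see; CUDA only enumerates NVIDIA GPUs, so the point is simply to confirm the NVIDIA card shows up at all:

```python
# List the CUDA-capable devices visible to PyTorch.
import torch

print("devices:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```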
-> New NVIDIA graphics drivers cause problems; going back to an older version fixes it <-
ubuntu-drivers devices
CodeFormer$ sudo apt autoremove nvidia* --purge
CodeFormer$ sudo /usr/bin/nvidia-uninstall
CodeFormer$ sudo apt install nvidia-driver-515   <---- works, OK, reboot <---
@garyakimoto's solution worked for me on Win 11 and CUDA 11.8.
Just had to install pyyaml manually: conda install pyyaml
Hi, so I tried @garyakimoto's method. I managed to install CUDA 11.5, and the PyTorch version is also correct, but CodeFormer still only uses the CPU and won't use CUDA. This is on Windows 10 with a GTX 1080.
UPDATE: Checking whether torch is using the CUDA GPU succeeds in plain Python, but in the conda environment it doesn't use the GPU. I installed numba and tried getting the system info; this is what it says. I also tried doing a clean install of everything in a new virtual environment, but I still get the same error:

Hardware Information
Machine : AMD64
CPU Name : znver1
CPU Count : 16
Number of accessible CPUs : 16
List of accessible CPUs cores : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
CFS Restrictions (CPUs worth of runtime) : None
CPU Features : 64bit adx aes avx avx2 bmi bmi2 clflushopt clzero cmov cx16 cx8 f16c fma fsgsbase fxsr lzcnt mmx movbe mwaitx pclmul popcnt prfchw rdrnd rdseed sahf sha sse sse2 sse3 sse4.1 sse4.2 sse4a ssse3 xsave xsavec xsaveopt xsaves

Memory Total (MB) : 32720
Memory Available (MB) : 18803

OS Information
Platform Name : Windows-10-10.0.18362-SP0
Platform Release : 10
OS Name : Windows
OS Version : 10.0.18362
OS Specific Version : 10 10.0.18362 SP0 Multiprocessor Free
Libc Version : ?

Python Information
Python Compiler : MSC v.1916 64 bit (AMD64)
Python Implementation : CPython
Python Version : 3.10.9
Python Locale : en_IN.cp1252

Numba Toolchain Versions
Numba Version : 0.56.4
llvmlite Version : 0.39.1

LLVM Information
LLVM Version : 11.1.0

CUDA Information
CUDA Device Initialized : True
CUDA Driver Version : 11.7
CUDA Runtime Version : 11.7
CUDA NVIDIA Bindings Available : False
CUDA NVIDIA Bindings In Use : False
CUDA Detect Output:
Found 1 CUDA devices
id 0 b'NVIDIA GeForce GTX 1080' [SUPPORTED]
Compute Capability: 6.1
PCI Device ID: 0
PCI Bus ID: 12
UUID: GPU-446ba761-7d06-bde9-66d7-10b96089d724
Watchdog: Enabled
Compute Mode: WDDM
FP32/FP64 Performance Ratio: 32
Summary: 1/1 devices are supported

CUDA Libraries Test Output:
Finding nvvm from CUDA_HOME named nvvm64_40_0.dll trying to open library... ok
Finding cudart from CUDA_HOME named cudart64_110.dll trying to open library... ok
Finding cudadevrt from CUDA_HOME named cudadevrt.lib ERROR: failed to find cudadevrt: cudadevrt.lib not found
Finding libdevice from CUDA_HOME trying to open library... ok

NumPy Information
NumPy Version : 1.23.5
NumPy Supported SIMD features : ('MMX', 'SSE', 'SSE2', 'SSE3', 'SSSE3', 'SSE41', 'POPCNT', 'SSE42', 'AVX', 'F16C', 'FMA3', 'AVX2')
NumPy Supported SIMD dispatch : ('SSSE3', 'SSE41', 'POPCNT', 'SSE42', 'AVX', 'F16C', 'FMA3', 'AVX2', 'AVX512F', 'AVX512CD', 'AVX512_SKX', 'AVX512_CLX', 'AVX512_CNL')
NumPy Supported SIMD baseline : ('SSE', 'SSE2', 'SSE3')
NumPy AVX512_SKX support detected : False

SVML Information
SVML State, config.USING_SVML : True
SVML Library Loaded : True
llvmlite Using SVML Patched LLVM : True
SVML Operational : True

Threading Layer Information
TBB Threading Layer Available : True
+--> TBB imported successfully.
OpenMP Threading Layer Available : True
+--> Vendor: MS
Workqueue Threading Layer Available : True
+--> Workqueue imported successfully.

Numba Environment Variable Information
None found.

Conda Information
Conda Build : 3.24.0
Conda Env : 23.3.1
Conda Platform : win-64
Conda Python Version : 3.10.9.final.0
Conda Root Writable : True
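Since torch reports CUDA in one interpreter but not in the environment used for CodeFormer, a small sketch like the one below can help narrow down which interpreter and which torch install that environment actually resolves; a common cause of this symptom is a second, CPU-only torch wheel from pip shadowing the conda CUDA build:

```python
# Run inside the conda env you launch CodeFormer from.
import sys
import torch

print(sys.executable)   # the Python interpreter actually running
print(torch.__file__)   # the torch package actually imported
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```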
You can run this command to install a torch build that matches your CUDA version:
pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
The documentation for previous versions is at the URL below:
https://pytorch.org/get-started/previous-versions/
In case it means anything, I only get this error message when using upsampling, i.e. I don't get it if I omit --face_upsample. However, in both cases the speed seems about the same, so I am unsure which I am actually running (GPU or CPU) because there are conflicting messages. Regardless, upsampling doesn't seem to work anyway, in that it still outputs 512x512 images. There is a lot of info missing from the install instructions, and time moves on, so I guess that is why this is so problematic. Still looking for a solution!
For anybody who hasn't figured it out yet, make sure you go to the PyTorch website for a guide.
Check when the latest update to CodeFormer was made and install the version of PyTorch that was released before that update.
In my case, PyTorch 2.0.0 works as I have CUDA 11.7.
https://pytorch.org/
For older versions of PyTorch, go to
https://pytorch.org/get-started/previous-versions/
Keep in mind you don't have to uninstall your current version of CUDA; you can install the one you need alongside the one you have. You just have to get Windows to point to the one you need by editing the Windows environment variables. You can do the same with cuDNN.
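As a minimal sketch of that check (the CUDA installer typically creates CUDA_PATH and per-version CUDA_PATH_V* variables on Windows; the exact names on your machine are an assumption here), you can print which install Windows currently points to:

```python
# Print the CUDA-related environment variables and any CUDA entries on PATH.
import os

print("CUDA_PATH =", os.environ.get("CUDA_PATH"))
for key, value in os.environ.items():
    if key.startswith("CUDA_PATH_V"):
        print(key, "=", value)
for entry in os.environ.get("PATH", "").split(os.pathsep):
    if "CUDA" in entry.upper():
        print("PATH entry:", entry)
```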
Experiencing the same issue. I am running CUDA 12.8 and don't want to install a lower version. Upgrading Torch to a nightly build solved the issue. Reference: https://pytorch.org/
The nightly build's version name will break the mechanism for checking the Torch version in the code. You can simply set `IS_HIGH_VERSION = True` to work around it.
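If you'd rather not hard-code it, below is a small sketch of a version check that tolerates nightly/dev version strings; the use of torch.__version__ and the 2.0 threshold are illustrative assumptions, not taken from the CodeFormer source, so adapt them to whatever the actual check compares against:

```python
# Version comparison that tolerates names like "2.8.0.dev20250101+cu128",
# which break naive int() parsing of the version string.
from packaging import version  # ships with pip/setuptools in most environments
import torch

IS_HIGH_VERSION = version.parse(torch.__version__).release >= (2, 0)
print(IS_HIGH_VERSION)
```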