Possible to use cauchy extension on CUDA 11.7?
I'm using Lambda Labs GPUs, which come with CUDA 11.7. Is there a simple way to use the cauchy extension with 11.7?
ubuntu@129-146-51-55:~/diffwave-sashimi-sourcesep/extensions/cauchy$ python setup.py install --user
running install
running bdist_egg
running egg_info
writing cauchy_mult.egg-info/PKG-INFO
writing dependency_links to cauchy_mult.egg-info/dependency_links.txt
writing top-level names to cauchy_mult.egg-info/top_level.txt
/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'cauchy_mult.egg-info/SOURCES.txt'
writing manifest file 'cauchy_mult.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (3.0.4) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Traceback (most recent call last):
  File "setup.py", line 20, in <module>
    setup(
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 172, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 158, in call_command
    self.run_command(cmdname)
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install_lib.py", line 23, in run
    self.build()
  File "/usr/lib/python3.8/distutils/command/install_lib.py", line 109, in build
    self.run_command('build_ext')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/build_ext.py", line 87, in run
    _build_ext.run(self)
  File "/usr/lib/python3/dist-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 499, in build_extensions
    _check_cuda_version(compiler_name, compiler_version)
  File "/home/ubuntu/.local/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 386, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.6) mismatches the version that was used to compile
PyTorch (11.7). Please make sure to use the same CUDA versions.
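The mismatch the error reports can be checked directly before building, as a sketch; it assumes nvcc and PyTorch are installed on the box (the nvcc line simply prints nothing if nvcc isn't on PATH):

```shell
# Print the toolkit version the build sees next to the version PyTorch
# was compiled against (on this box: 11.6 vs 11.7, hence the error).
command -v nvcc >/dev/null && nvcc --version | grep -o 'release [0-9.]*'
python3 - <<'PY'
try:
    import torch
    print("torch built for CUDA", torch.version.cuda)
except ImportError:
    print("torch not installed here")
PY
```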
Is the CUDA on the machines linked to 11.7? What does nvcc --version give? If it's not 11.7, you'll have to point $CUDA_HOME at the correct version (e.g. symlink /usr/local/cuda to /usr/local/cuda-11.7).
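The suggested symlink fix can be sketched like this. It is demonstrated in a scratch directory so nothing real is touched; the /usr/local paths and the CUDA_HOME export in the comments only apply if a cuda-11.7 toolkit actually exists on the machine:

```shell
# Demonstrate the symlink trick in /tmp (-sfn replaces any existing link).
mkdir -p /tmp/cuda-demo/cuda-11.7/bin
ln -sfn /tmp/cuda-demo/cuda-11.7 /tmp/cuda-demo/cuda
readlink /tmp/cuda-demo/cuda   # -> /tmp/cuda-demo/cuda-11.7
# Real-machine equivalent (needs root, and /usr/local/cuda-11.7 must exist):
#   sudo ln -sfn /usr/local/cuda-11.7 /usr/local/cuda
#   export CUDA_HOME=/usr/local/cuda
```

PyTorch's cpp_extension resolves the toolkit via the CUDA_HOME environment variable before falling back to /usr/local/cuda, so either the symlink or the export should steer the build.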
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_Mar__8_18:18:20_PST_2022
Cuda compilation tools, release 11.6, V11.6.124
Build cuda_11.6.r11.6/compiler.31057947_0
I don't actually have /usr/local/cuda or /usr/local/cuda-11.7, nor do I have a CUDA_HOME variable. What are my next steps?
GPT-4 suggested I download and install CUDA 11.7, since that's the version my PyTorch requires, but that seems excessive. Is there a simpler workaround?
You should look at where nvcc is (type -a nvcc or which nvcc), which should be inside the CUDA folder. Go one directory up and see if other CUDA versions are in there. The original cuda folder is usually a symlink to another directory (usually called cuda-11.6 in your case), and the best-case scenario is that there is a cuda-11.7 that you can symlink it to.
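The checks above can be sketched as follows; the fallback to sh is only there so the snippet runs on machines without nvcc, and you would substitute nvcc in practice:

```shell
# List every nvcc the shell can see on PATH.
type -a nvcc 2>/dev/null || echo "nvcc not on PATH"
# readlink -f follows the whole symlink chain to the real file,
# which reveals whether nvcc lives under a versioned cuda-X.Y directory.
bin=$(command -v nvcc || command -v sh)
readlink -f "$bin"
```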
/usr/bin/nvcc
:(
Any thoughts? Thank you
I do have a folder called '/usr/lib/cuda' with the following:
/usr/lib/cuda/
/usr/lib/cuda/nvvm
/usr/lib/cuda/nvvm/libdevice
/usr/lib/cuda/include
/usr/lib/cuda/include/cuda.h
/usr/lib/cuda/bin
/usr/lib/cuda/lib64
Hmm, okay, I'm not familiar with this setup. It may be a nonstandard installation that will be hard to make work. Are there any other folders named cuda-something under /usr/lib/? Or anywhere else, if you run a find from the root?
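A minimal sketch of that search, assuming the usual /usr and /opt install prefixes (widen the paths or depth if nothing turns up):

```shell
# Shallow system-wide search for versioned CUDA toolkit directories;
# 2>/dev/null hides permission noise, -maxdepth keeps it fast.
find /usr /opt -maxdepth 3 -type d -name 'cuda*' 2>/dev/null
echo "search complete"
```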