
Decommission CUDA 10.2 support

Open atalman opened this issue 3 years ago • 2 comments

We keep CUDA 10.2 only for PyPI releases (https://pypi.org/project/torch/#files), which have an 850 MB size limit. Here is the PyPI support ticket we opened last year to increase the limit. Here is the related discussion: Issue 56055

We should discontinue CUDA 10.2. Doing so will likely require shipping a PyPI wheel without CUDA support, but we can't carry this CUDA 10 situation forward indefinitely. We could print an error message when users attempt to initialize a CUDA context from their PyPI installs. The complication is users who don't install PyTorch directly, but get it through various dependency resolution methods, e.g. torch listed in requirements.txt.

Possible solution to explore: link with CUDA dynamically so PyTorch is distributed without CUDA on pypi.org, and implement a post-install script that fetches CUDA with curl after the PyPI install completes.
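A post-install step along those lines might look like the sketch below. The CUDA version and download URL layout are illustrative assumptions, and the actual download is left commented out; this is not an official installer.

```shell
# Hypothetical post-install sketch: fetch the CUDA runtime with curl
# after the CPU-only wheel is installed. Version and URL are placeholders.
CUDA_VERSION="11.6.2"
RUNFILE="cuda_${CUDA_VERSION}_linux.run"
URL="https://developer.download.nvidia.com/compute/cuda/${CUDA_VERSION}/local_installers/${RUNFILE}"
echo "Would fetch: ${URL}"
# Uncomment to actually download and run the installer:
# curl -fsSL -O "${URL}"
# sh "${RUNFILE}" --silent --toolkit
```

Note that pip has no native post-install hook, so wiring this up reliably across install methods is itself an open question.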

Subtasks for decommissioning:

  • [x] #1134
  • [x] #1133
  • [x] #1132
  • [x] #1154

atalman avatar May 02 '22 23:05 atalman

We should proceed with the following schedule for dropping 10.2 support:

  • Release 1.12 - announce
  • Release 1.13 - actual drop

atalman avatar May 04 '22 16:05 atalman

One solution for dropping CUDA 10.2, discussed with @ptrblck, is to link with CUDA dynamically rather than statically (as is done now). We would then download the CUDA runtime on the install machine, similar to this: https://docs.nvidia.com/cuda/cuda-quick-start-guide/index.html#pip-wheels-installation-linux

@malfet @ngimel @ezyang Please let me know if there are any objections to us trying to implement dynamic linking against CUDA 11+. This way we can continue to distribute PyTorch on PyPI and decommission CUDA 10.2.
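The dynamic-linking idea above can be sketched in miniature with ctypes: instead of statically embedding the CUDA runtime, the wheel would locate and load libcudart at runtime and fall back gracefully when it is absent. This is an illustrative sketch, not PyTorch's loader.

```python
# Minimal sketch of dynamic CUDA linking: find and load libcudart at
# runtime instead of statically embedding it. Illustrative only.
import ctypes
import ctypes.util


def load_cuda_runtime():
    """Return a handle to libcudart if it is present, else None."""
    name = ctypes.util.find_library("cudart")
    if name is None:
        return None  # CPU-only environment: degrade gracefully
    return ctypes.CDLL(name)
```

In this scheme the runtime could come from NVIDIA's pip wheels described in the linked guide, so a CUDA-enabled install stays within ordinary PyPI dependency resolution.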

atalman avatar Jun 06 '22 14:06 atalman