Instructions for installing PyTorch
- [x] I have searched the issues of this repo and believe that this is not a duplicate.
Issue
As mentioned in issue https://github.com/python-poetry/poetry/issues/4231, there is some confusion around installing PyTorch with CUDA, but it is now somewhat resolved. It still requires a few steps, and all options have pretty serious flaws. Below are two options that 'worked' for me, on Poetry version 1.2.0.
Option 1 - wheel URLs for a specific platform
- You will need to pick the specific wheels you want. These are listed here: https://download.pytorch.org/whl/torch_stable.html. E.g. if you want CUDA 11.6, Python 3.10 and Windows, search that page for cu116-cp310-cp310-win_amd64.whl to see the matches for torch, torchaudio and torchvision.
- In your pyproject.toml file add the URLs like:
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchaudio = { url = "https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchvision = { url = "https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl"}
- Run poetry update. It will download a lot of data (many GB) and take quite some time. And this doesn't seem to cache reliably (at least, I've waited 30+ minutes at 56 Mbps three separate times while troubleshooting this, for the exact same wheels).
Note that each subsequent poetry update will do another huge download and you'll see this message:
• Updating torch (1.12.1+cu116 -> 1.12.1+cu116 https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
• Updating torchaudio (0.12.1+cu116 -> 0.12.1+cu116 https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
• Updating torchvision (0.13.1+cu116 -> 0.13.1+cu116 https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl)
Option 2 - alternate source
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { version = "1.12.1", source="torch"}
torchaudio = { version = "0.12.1", source="torch"}
torchvision = { version = "0.13.1", source="torch"}
[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu116"
secondary = true
This seems to have worked (although I already had the packages installed), but it reports errors like Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/. I think the packages get installed anyway (maybe a better message would be "Can't access pillow at 'https://download.pytorch.org/whl/cu116', falling back to pypi").
Also, if you later go on to do, say poetry add pandas (a completely unrelated library) you'll get a wall of messages like:
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/numpy/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/requests/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/typing-extensions/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/certifi/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/urllib3/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/idna/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/charset-normalizer/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/
This happens with or without secondary = true in the source config.
Maintainers: please feel free to edit the text of this if I've got something wrong.
Failure to cache during resolution is covered by #2415; Poetry's insistence on checking all sources for all packages is discussed at #5984.
Solution 1 seems infeasible when working in a team with machines on different operating systems, due to the need to provide the complete URL of the wheel, including operating system and exact version number.
Solution 2 seems to work, but it results in downloading every single PyTorch version that can be found, independent of the operating system. I'm running Windows and the download looks like this:
(screenshot of the download output omitted)
A single installation takes around 15-20 minutes at ~250 Mbps.
Poetry will always download wheels for every platform when you install -- this is because there is no other way to get package metadata from a repository using PEP 503's API.
Can you elaborate a little on what metadata is needed and why downloading every conceivable version of a package yields that metadata? As mentioned, this leads to a ~20 min install for one package.
Solution 1 seems infeasible when working in a team with machines on different operating systems, due to the need to provide the complete URL of the wheel, including operating system and exact version number.
It's not convenient, but it should be feasible with multiple constraints dependencies.
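For illustration, here is a minimal sketch of what that could look like with multiple constraints dependencies. The Linux wheel URL is my assumption, extrapolated from the naming pattern of the Windows wheels above; torchaudio and torchvision would follow the same pattern:
[tool.poetry.dependencies]
python = "~3.10"
# one torch entry per platform, selected via environment markers
torch = [
    { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl", markers = "sys_platform == 'win32'" },
    { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-linux_x86_64.whl", markers = "sys_platform == 'linux'" },
]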
Poetry requires the package's core metadata aka the METADATA file (most critically this includes dependencies), as well as the bdist/sdist itself for hashing purposes. Note that PEP 658 is a standard for serving the METADATA file that is implementable by third-party repositories, and PEP 691 specifies a (potentially) richer JSON API (including hashes) that third-party repositories could likely implement.
However, Poetry is unlikely to grow support for these new APIs until PyPI does, and I think third party repos are unlikely to implement it before PyPI. Eventually support for these APIs will allow for feature and performance parity in Poetry between PyPI and third-party repositories. Until then, we are stuck with the legacy HTML API, which requires us to download every package when generating a lock file for the first time.
After your cache is warm you will not need to download again, and on other platforms you will only download the necessary files as the metadata is captured in the lock file.
What I'm not understanding is that poetry knows I'm on Linux with python 3.8 but it still downloads
https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp37-cp37-win_amd64.whl
Or does that wheel not contain the core metadata that is needed?
Also there seems to be a second problem going on here, unless I've misunderstood the documentation.
I have this in my pyproject.toml
[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true
i.e. secondary = true
Yet poetry is asking that repository for every package I try to install. I thought from the documentation that secondary meant it would go to pypi for any package unless specifically asked to go to that custom repository.
What I'm not understanding is that poetry knows I'm on Linux with python 3.8 but it still downloads
https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp37-cp37-win_amd64.whl
Or does that wheel not contain the core metadata that is needed?
Poetry constructs a universal lock file -- we write hashes to the lock file for all supported platforms. Thus on the first machine you generate a lock file, you will download a wheel for every supported platform. There is no way to write hashes to the lock file for those foreign/other platform versions without downloading them first.
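As a rough sketch (hashes elided as placeholders), the poetry.lock of this era records something like the following for torch, which is why one wheel per platform has to be fetched and hashed at least once:
[metadata.files]
torch = [
    # one entry (and hash) per supported platform/Python combination
    {file = "torch-1.12.1+cu116-cp310-cp310-linux_x86_64.whl", hash = "sha256:..."},
    {file = "torch-1.12.1+cu116-cp310-cp310-win_amd64.whl", hash = "sha256:..."},
]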
If you want to reduce the scope of this a bit, you can tighten your Python constraint. There is a prototype of a new feature at #4956 (though it needs resurrection, design, and testing work) to add arbitrary markers to let a project reduce its supported platforms as an opt-in.
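For example (a sketch): with a broad constraint like python = "^3.8", Poetry has to lock cp38, cp39, and cp310 wheels; narrowing to a single minor version cuts that down considerably:
[tool.poetry.dependencies]
# only cp310 wheels need to be downloaded and hashed during locking
python = ">=3.10,<3.11"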
Also there seems to be a second problem going on here, unless I've misunderstood the documentation
I have this in my pyproject.toml
[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true
i.e. secondary = true. Yet poetry is asking that repository for every package I try to install. I thought from the documentation that secondary meant it would go to pypi for any package unless specifically asked to go to that custom repository.
I think it might be you misreading -- that is the intended and documented behavior. There is a proposal to introduce new repository types at https://github.com/python-poetry/poetry/pull/5984#issuecomment-1237245571 as the current secondary behavior covers multiple use cases poorly, while being ideal for none of them.
Poetry constructs a universal lock file
OK this makes sense now, thanks for the explanation, looking forward to that PR hopefully being merged eventually
I think it might be you misreading
I was misreading; I see that this is intended behavior.
the current secondary behavior covers multiple use cases poorly
I agree with that and I hope that these new repository types can be implemented
Can't wait for option 2 to have good performance!
Please :+1: on issues instead of commenting me too -- it keeps the notifications down and still shows interest. Thanks!
Probably a follow-up issue on the second option:
If I do poetry install on that, I get
Installing dependencies from lock file
Package operations: 50 installs, 0 updates, 0 removals
• Installing wrapt (1.12.1): Failed
KeyringLocked
Failed to unlock the collection!
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/keyring/backends/SecretService.py:67 in get_preferred_collection
63│ raise InitError("Failed to create the collection: %s." % e)
64│ if collection.is_locked():
65│ collection.unlock()
66│ if collection.is_locked(): # User dismissed the prompt
→ 67│ raise KeyringLocked("Failed to unlock the collection!")
68│ return collection
69│
70│ def unlock(self, item):
71│ if hasattr(item, 'unlock'):
I guess it tries to load that from the secondary repo as well, and expects to use the keyring because of the authorization errors?
That's #1917 -- our use of keyring hits surprisingly many system configurations in which hard errors occur, and it needs some work.
@neersighted at least PyPI seems to have started work on the new JSON API
Indeed, Poetry 1.2.2 relies on the new PEP 691 support. However, PEP 658 is the real blocker for better performance in third-party repos -- there is a long-running PR blocked on review and a rather combative contributor, but otherwise no major progress on that front.
Could you add a link to the blocked PR? I switched today to method 1 because method 2 took ages. Seems the metadata servers are slowed down today... Dependency resolution that takes up to 4000 seconds for method 1 is also insane. And then it failed because I accidentally copied 1.12.0 instead of 1.12.1 for the windows release. I really like the idea of poetry, but this needs huge improvement.
We use neither of these approaches.
My hacky solution is to just install the base version of pytorch, say 1.12.1, with poetry and then install the specific versions needed for the machines with pip in a makefile command.
As the gpu versions have the same dependencies as the base version, this should be OK.
The big downsides are
- downloads both torch versions
- lockfile does not match the installed version (of pytorch)
- a change in the pytorch version requires an update to the makefile as well as the pyproject.toml
- any poetry action like poetry add will reinstall the base version
- breaks packaging - usually not a concern for ml projects
The big upside is that it is very easy to create make scripts for different machines - and that it's pretty fast (very important for ci/cd). Example:
[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = "1.12.1"
torchvision = "0.13.1"
install_cu116:
	poetry install
	poetry run pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
this has essentially been our approach, but in Dockerfiles, gated by a build arg
ARG TORCH_ARCH="cpu"
#ARG TORCH_ARCH="cu113"
RUN poetry install -vvv --no-root --no-dev \
&& pip install -U wheel torch==1.12.1+${TORCH_ARCH} torchvision==0.13.1+${TORCH_ARCH} -f https://download.pytorch.org/whl/torch_stable.html \
&& pip uninstall poetry -y \
&& rm -rf ~/.config/pypoetry \
&& rm -rf /root/.cache/pip
This also allows installing the CPU version, which is a smaller package that lacks the CUDA libraries that come in the normal pytorch package from pypi. This helps to slim the images down.
Hi! Do you know if it's possible to specify two different optional versions of torch in the pyproject.toml? I would like to use a cpu version locally and a gpu version on a remote server. You can have a look at an example in this stack overflow post.
That is something not unlike #5222; the consensus has been that, as Poetry is an interoperable tool, no functionality will be added to the core project to support this until there is a standards-based method. A plugin can certainly support this with some creativity, and in my mind a plugin would be the immediate solution for the "I want Poetry to support this" use case.
#5222
Thanks for your answer, do you have any such plugin in mind?
I don't have any links at hand, but building on top of Light the Torch has been discussed. But if you mean whether I know of anyone working on one, no, not that I am aware of.
For as long as pip install torch does Just Work ™️, having to put any effort at all into installing torch via poetry will be a vote against using it in commercial ML work that requires torch or similar hardware-bound components. Assuming that one is open to considering poetry, conda, and pipenv as functionally equivalent options, of course.
I don't see that as a Poetry problem -- the issue is that there is an unmet packaging need, and no one from the ML world is working with the PyPA to define how to handle this robustly/no one from the PyPA has an interest in solving it.
Likewise, there is no interest in Poetry in supporting idiosyncratic workarounds for a non-standard and marginally compatible ecosystem; we'll be happy to implement whatever standards-based process evolves to handle these binary packages, but in the meantime any special-casing and package-specific functionality belongs in a plugin and not Poetry itself.
@neersighted I respect your perspective. If Poetry has no interest in supporting ML world, perhaps as a non-standard and marginal corner of the Python developer ecosystem, then its idiosyncrasies are absolutely of no concern to the project.
@arthur-st poetry install torch works the exact same as pip install torch - they both install the ~CPU version only~ same version (CPU only on Windows/Mac, currently CUDA 11.7 on Linux - thanks for the correction @Queuecumber). If you want a specific GPU version of torch, you need to do much more than pip install torch (you need to be on linux, need to identify your host's CUDA version, find the right index url, pass the flags, ...). This limitation applies to all of the lock management tools (Pipenv, pip-tools, etc) and is only "possible" with pip because it doesn't tackle these locking/cross env problems at all.
I'd love to see torch distribute separate packages, like torch (cpu only), torch-cu113 (adds CUDA 11.3 support to the base torch package), torch-cu116 (same for CUDA 11.6), ... and then it'd be much easier to specify platform-specific dependencies (i.e. only install torch-cu113 on linux) and help out all of these tools (still tricky to ensure the host has the matching CUDA version, but it would at least reduce the problem). I understand that can be hard with the compiled dependencies, but it seems possible with the shared objects they bundle and clever version pinning.
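Purely for illustration, a sketch of how such hypothetical split packages might be declared (torch-cu116 does not exist on PyPI today; the name is made up for this example):
[tool.poetry.dependencies]
torch = "1.12.1"
# hypothetical add-on package, installed only on linux
torch-cu116 = { version = "1.12.1", markers = "sys_platform == 'linux'" }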
@JacobHayes That doesn't quite reconcile with my experience. On Windows and Linux, you just need to refer pip to the respective official package index – easy enough to memorise even, e.g., https://download.pytorch.org/whl/cu116 for CUDA 11.6, /rocm52 for ROCm 5.2 – whereas the accelerated M1 version installs right off the PyPI with plain pip install torch.
Poetry, on the other hand, just spent the two hours of patience I had for revisiting it today on failing poetry add torch in an empty Python 3.8 project created specifically to test that single operation. The tool was alternating between not being able to find any torch packages at all and failing to find platform-specific CUDA dependencies for my M1 Mac, depending on where in the troubleshooting process I was at a given moment. Amusing as it was to see that it correctly identifies the need for a platform-specific dependency (even if that was for a CUDA package of all things), while not even trying to simply install a Mac-specific version of the package itself (the only dependency of which is typing_extensions of any version), the value proposition for using poetry in commercial ML projects is not there yet, and my two cents of feedback are that improving just the docs is unlikely to cut it for the ML crowd. That said, I am, of course, just a single person, could very well have held poetry wrong, and will still be keen to revisit it during my next toolchain review.
Maybe I'm misunderstanding but I think there's some confusion
Ref: https://pytorch.org/get-started/locally/
First of all (on Linux) pip install torch does not install the CPU version, it installs the default cuda version which is currently 11.7. If you want to install a cpu version you need to use a different command: pip3 install torch --extra-index-url https://download.pytorch.org/whl/cpu. This requires an extra index URL but it also identifies a different package version using the + convention.
So you could do something like
[tool.poetry.dependencies]
torch = { version = "^1.12.1+cpu", source = "torchcpu" }
[[tool.poetry.source]]
name = "torchcpu"
url = "https://download.pytorch.org/whl/cpu"
default = false
secondary = true
if you want CPU support, or if you want a specific cuda version:
torch = { version = "^1.12.1+cu116", source = "torchcu116" }
[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true
The issue that is being discussed on this bug report is that it's really slow/throws a bunch of warnings depending on which command you run. Also, I have no idea if this works on windows, but you probably shouldn't be using windows.
Do you know if it's possible to specify two different optional versions of torch in the pyproject.toml
I was actually pretty surprised to find that this doesn't work with config groups and I think fixing this should be considered. I think something like the following should be possible:
[tool.poetry.group.gpu]
optional = true
[tool.poetry.group.gpu.dependencies]
torch = { version = "1.12.1+cu116", source = "torchcu116" }
[tool.poetry.group.cpu]
optional = true
[tool.poetry.group.cpu.dependencies]
torch = { version = "1.12.1+cpu", source = "torchcpu" }
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[[tool.poetry.source]]
name = "torchcpu"
url = "https://download.pytorch.org/whl/cpu"
default = false
secondary = true
[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true
where you can then do poetry install --with cpu for your cpu environment or poetry install --with gpu for your cu116 GPU environment. But the dependencies in all groups need to be consistent, so you get Because test depends on both torch (1.12.1+cu116) and torch (1.12.1+cpu), version solving failed. I don't think that should be enforced for optional groups unless the user tries to actually install two groups with conflicting packages.
Mutually exclusive groups is #1168 -- this is not a matter of "enforcement" but of core design and architecture of the solver; changing it has implications for correctness, maintainability, and performance, but should be possible with significant care and effort. Please keep discussion of that feature on that issue.