Update condaforge/miniforge3
Update to match https://github.com/rapidsai/miniforge-cuda/pull/62
Supersedes #643
The issue seems to be that you're changing the python version in an already-built environment. That's very fragile. Here, there's a library that requires python >=3.10, truststore. Python 3.9 doesn't actually need it, but because it is already in the environment, mamba does not know that it can remove it:
```
15.38 The following packages are incompatible
15.38 ├─ python 3.9** is requested and can be installed;
15.38 └─ truststore is not installable because it requires
15.38    └─ python >=3.10 but there are no viable options
15.38       ├─ python [3.10.0|3.10.1|...|3.12.3] conflicts with any installable versions previously reported;
15.38       └─ python 3.12.0rc3 would require
15.38          └─ __python_rc, which does not exist (perhaps a missing channel).
```
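For illustration, a minimal sketch of how this failure mode can be reproduced (the image tag here is a placeholder, not necessarily the one this PR uses):

```Dockerfile
# Hypothetical repro: the miniforge3 base environment already ships a
# python >=3.10 along with truststore (which requires python >=3.10), so
# pinning python back to 3.9 in that same environment cannot be solved.
FROM condaforge/miniforge3:24.3.0-0
RUN mamba install -y -n base python=3.9
```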
There are several ways around this:
- Create a new environment instead of modifying base (see the sketch after this list)
- Create a base miniforge install that has an older python in the base environment
- Create multiple miniforge images, one for each python version that we use
- Probably other ways too, if none of these are palatable
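As a rough sketch of the first option (the environment name, image tag, and python pin are placeholders, not what this PR ships):

```Dockerfile
# Sketch of "create a new environment": leave base untouched and create a
# fresh environment pinned to the desired python; "rapids" is hypothetical.
FROM condaforge/miniforge3:24.3.0-0
ARG PYTHON_VER=3.9
RUN mamba create -y -n rapids python=${PYTHON_VER} \
    && mamba clean -afy
# Put the new environment ahead of base on PATH.
ENV PATH=/opt/conda/envs/rapids/bin:${PATH}
```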
The latest code changes adjust the Dockerfile for the raft-ann cpu benchmark, using micromamba to populate the environment in /opt/conda.
Relative to using the miniforge3 docker image, this approach avoids changing the python version in an already-created environment, which was the cause of the conflict.
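For reference, a minimal sketch of that approach (the micromamba image tag and python pin are assumptions, not the exact Dockerfile in this PR):

```Dockerfile
# Sketch: build /opt/conda from scratch with micromamba, so the python
# version is chosen at environment-creation time rather than swapped
# into a pre-built base environment.
FROM mambaorg/micromamba:1.5.8
ARG PYTHON_VER=3.11
# MAMBA_ROOT_PREFIX defaults to /opt/conda in this image.
RUN micromamba install -y -n base -c conda-forge \
        python=${PYTHON_VER} \
    && micromamba clean --all --yes
```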
@raydouglass the builds are all working now, but there's a failing test in the cuspatial/ZipCodes_Stops_PiP_cuSpatial.ipynb notebook. It's hard to decode the traceback through all the markup, but it seems like it might be related to cudf. https://github.com/rapidsai/docker/actions/runs/10207287173/job/28242755845?pr=660#step:9:169
How should we handle this? Try to fix that notebook issue in a separate PR (maybe separate repo?) and then re-run the failed jobs here?
I can't tell if it's the exact same error, but the same notebook has an open issue: https://github.com/rapidsai/cuspatial/issues/1426.
Since it's unlikely to be related to the docker image itself, I think we can merge this PR and the notebook issue will be resolved in cuspatial.
Just realized I cannot approve since I opened this PR originally. But it would be good to get a review from @dantegd and maybe @AyodeAwe.