Speeding up the CI/CD pipeline
CI takes too long to run.

Using ccache (#146) helps somewhat, but not on Windows, where it still needs to be made to work. The following could be a starting point; see also https://github.com/ccache/ccache/wiki/MS-Visual-Studio:
```toml
[tool.cibuildwheel.windows]
before-all = "choco install ccache"
before-test = "ccache --show-stats"
build-verbosity = 3  # For debugging.

[tool.cibuildwheel.windows.environment]
CC = "ccache.exe cl.exe"  # Seems to be ignored?
```
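Since setting `CC` seems to be ignored by MSVC builds, the ccache wiki's workaround is to "masquerade" ccache as the compiler: copy `ccache.exe` to a file named `cl.exe` and put its directory ahead of MSVC's on `PATH`. A hypothetical, untested cibuildwheel sketch of that idea (the choco install path and the `$PATH` expansion behavior are assumptions):

```toml
[tool.cibuildwheel.windows]
# Copy ccache.exe into a shim directory under the name cl.exe, so compiler
# invocations go through ccache (per the ccache MS-Visual-Studio wiki page).
before-all = 'choco install ccache && mkdir C:\ccache-shim && copy C:\ProgramData\chocolatey\bin\ccache.exe C:\ccache-shim\cl.exe'

[tool.cibuildwheel.windows.environment]
# Prepend the shim directory so cl.exe resolves to the ccache copy first
# (assumes cibuildwheel expands $PATH in environment values on Windows).
PATH = 'C:\ccache-shim;$PATH'
```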
The other thing to try is breaking up CI runs per platform and per Python version. Maybe Cython could also target the stable Python ABI, so we compile just once for many versions.
Reposting from https://github.com/harfbuzz/uharfbuzz/pull/146#issuecomment-1383801111:
To reduce macOS build time, I think we don't need to build both arm64 and universal2 wheels (in addition to x86_64 wheels): we can get away with building only universal2 (covering both x86_64 and arm64), plus x86_64 for legacy tooling that doesn't understand universal2.
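A sketch of what that would look like in the cibuildwheel config (assuming the `archs` option; arm64-only builds are simply left out of the list):

```toml
[tool.cibuildwheel.macos]
# Build x86_64 for older pips that don't understand universal2, plus a single
# universal2 wheel that also covers arm64; skip the redundant arm64-only build.
archs = ["x86_64", "universal2"]
```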
> breaking up CI runs into per-platform, per-version
I think running per-version builds in parallel should yield the best speed gains; I did that for skia-pathops when building linux-aarch64 wheels (slow because they are compiled inside qemu).
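A hypothetical GitHub Actions fragment for that: one job per Python version via a matrix, so versions build in parallel instead of serially inside one cibuildwheel run (the version tags, action version, and use of `CIBW_BUILD` to narrow each job are illustrative assumptions):

```yaml
jobs:
  build_wheels:
    strategy:
      matrix:
        # One parallel job per CPython version (list is illustrative).
        python: ["cp38", "cp39", "cp310", "cp311"]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: pypa/cibuildwheel@v2.12.0
        env:
          # Restrict each matrix job to a single interpreter's wheels.
          CIBW_BUILD: "${{ matrix.python }}-*"
```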
Also, it is not really necessary to build all the wheels on every single commit push. We could build only a subset every time (say, the min and max supported versions on Linux) and build the full set only on tagged commits. We could push a pre-release tag to test the builds on the remaining environments before making a stable release.
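One way to wire that up is a CI step that sets cibuildwheel's `CIBW_BUILD` selector depending on whether the ref is a tag. A sketch, assuming a GitHub Actions environment (`GITHUB_REF`) and cp38/cp312 as the hypothetical min/max supported versions:

```shell
# On tag pushes build everything; otherwise build only min/max CPython wheels.
if [[ "${GITHUB_REF:-}" == refs/tags/* ]]; then
  CIBW_BUILD=""                   # empty selector: cibuildwheel builds all
else
  CIBW_BUILD="cp38-* cp312-*"     # min and max supported versions (assumed)
fi
export CIBW_BUILD
echo "CIBW_BUILD=${CIBW_BUILD}"
# A later step would then run: python -m cibuildwheel --output-dir wheelhouse
```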
Or we could even move the full wheel-building logic to a separate repository, like I did for github.com/google/brotli-wheels, and keep the CI in this repo to a minimum.
> Maybe Cython could also be using the stable Python ABI, so we compile just once for many versions.
That'd be great, but support for that in Cython is still experimental; it's tracked in https://github.com/cython/cython/issues/2542.
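If the experimental support pans out, a setup.py could opt into the limited API so one abi3 wheel covers many CPython versions. A hypothetical sketch (the module/source paths, the `CYTHON_LIMITED_API` macro, and the minimum version `0x03080000` are assumptions, not a tested configuration):

```python
from setuptools import Extension

# Py_LIMITED_API pins the oldest CPython the abi3 wheel should support;
# CYTHON_LIMITED_API is the experimental Cython-side opt-in discussed in
# cython/cython#2542 (name/behavior may change as the feature evolves).
ext = Extension(
    "uharfbuzz._harfbuzz",                    # hypothetical module path
    sources=["src/uharfbuzz/_harfbuzz.pyx"],  # hypothetical source path
    define_macros=[
        ("Py_LIMITED_API", "0x03080000"),
        ("CYTHON_LIMITED_API", "1"),
    ],
    py_limited_api=True,  # tells setuptools to build against the stable ABI
)

if __name__ == "__main__":
    from setuptools import setup
    from Cython.Build import cythonize

    setup(ext_modules=cythonize([ext]))
```

The wheel would also need the abi3 tag (e.g. via bdist_wheel's `py-limited-api` option) so pip knows one build serves all the covered versions.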