# The compiler support topic
Like other PRs before it, #1719 raises the topic of compiler support (and, with it, compiler features and compatibility):
- https://github.com/DaemonEngine/Daemon/pull/1719
As @VReaperV said in that comment:
> trying to support a large zoo of compiler versions is an issue in itself.
I like the ability to build with various compiler families (MSVC, GCC, Clang, etc.), as they may report very different kinds of issues. I actually have a zoo of compilers I test building the engine with (and running the build), which I find interesting and useful in many respects.
But what I'm far less concerned about are compiler versions. I don't care much about supporting GCC 10 if we can run GCC 14, or supporting Clang 13 if we can run Clang 20… A couple of compiler versions from the same lineage are useful when checking for regressions, but otherwise I'm less concerned about versions.
Also, I would like to be able to use the latest compiler tech to build our releases when possible. One motivation is optimization: not only may newer compilers implement more built-in optimizations, but, for example, I know that our current arm64 build of turbojpeg skips some NEON code because the compiler in old-old-stable Debian is too old.
I have an ongoing effort to migrate our release build environment to Debian old-stable, which will bring a newer compiler. Linux is annoying in that you cannot target a specific libc version when compiling, so to target an old libc version you usually build on an obsolete distribution.
As soon as we can install a newer compiler on an old distro for that libc trick when building a game release, I believe we no longer have to support the stock compiler, because what we want is the old libc, not the old compiler.
What is important is that people can build the game on the current stable distributions by relying on the stock compiler.
I see these options:
- For Linux release builds, we can build on old-stable Debian using Clang from apt.llvm.org, which always provides the latest stable Clang (currently Clang 20); in fact they even provide it for old-old-stable Debian.
- For Windows release builds, our only option is MinGW, which is GCC, and which has no backports, unlike Clang. But then the libc of the Linux distribution running MinGW doesn't matter at all, so we can use Debian stable to build Windows releases, or even more recent Debian-like distros such as Ubuntu, and get a more recent MinGW.
- For macOS, AppleClang makes it possible to target an old macOS libc even when building on the latest macOS with the latest AppleClang, but our current Darling-based docker build uses an old Xcode.
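For the Linux option, apt.llvm.org documents a helper script that sets up their repository and installs a given Clang version; a minimal provisioning sketch (the version number is an example, pick whatever is current):

```shell
# Sketch: install a recent Clang from apt.llvm.org on an old Debian.
# llvm.sh is the helper script published by apt.llvm.org; it adds the
# LLVM apt repository for the running distro and installs the requested
# version (20 here is an example).
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 20

# Point the build at the freshly installed toolchain.
export CC=clang-20 CXX=clang++-20
```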
There is a similar topic about which CMake versions to support, but CMake provides downloadable builds for Linux (usable for both Linux and MinGW builds) and for macOS; our Darling-based docker build already uses a downloaded CMake this way (though not the latest, because of Darling compatibility).
I don't mind having our docker release script run the latest CMake and the latest Clang on an old-stable or even old-old-stable distribution, and raising both the CMake and compiler requirements, as long as current stable and LTS distributions can build the game out of the box.
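The downloadable CMake builds mentioned above follow a predictable naming scheme on Kitware's GitHub releases; here is a small sketch (the version and architecture below are arbitrary examples, not a recommendation):

```shell
# Compose the download URL for an official CMake Linux binary release,
# following the naming scheme of Kitware's GitHub release artifacts.
cmake_url() {
  version="$1"  # e.g. 3.28.6
  arch="$2"     # e.g. x86_64 or aarch64
  echo "https://github.com/Kitware/CMake/releases/download/v${version}/cmake-${version}-linux-${arch}.tar.gz"
}

# Usage sketch: fetch and unpack a standalone CMake, then put it on PATH.
# curl -LO "$(cmake_url 3.28.6 x86_64)"
# tar xf cmake-3.28.6-linux-x86_64.tar.gz
# export PATH="$PWD/cmake-3.28.6-linux-x86_64/bin:$PATH"
```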
As a reminder, we have an ongoing effort to upgrade our NaCl compiler from PNaCl 3.6 to Saigo Clang 20, so once that's done we will be able to massively bump our required C++ version and other things and get new toys (and forget about the past).
Some links:
- My effort to bump `external_deps`; some libraries are not even using the latest version because the compiler or CMake in old-stable is too old: https://github.com/DaemonEngine/Daemon/pull/1433
- My effort to bump the Linux distribution baseline in our docker release build script: https://github.com/Unvanquished/release-scripts/pull/40#discussion_r2225875170
- An old attempt of mine to get newer compilers on old Linux distros for building the release: https://github.com/DaemonEngine/Daemon/pull/875
The real cause of libc incompatibility is not building with a newer header, but using new functions. glibc uses symbol versioning, so backward incompatibility only happens when you use a specific symbol that is too new for the libc being dynamically linked against. In our own codebase it is easy to avoid adding any fancy new syscalls, so the main problem is dependencies. If prebuilt dependencies from the distro package manager are used, they might depend on new stuff from the distro's libc. If we build our own dependencies, they might have tricky configure scripts that enable features as new as the system libc allows. So it is possible to build with a new libc, if you are willing to go through all the dependencies and squash all of the dependencies' build options that cause new libc functions to be used. I did that for the updater.
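To illustrate the symbol-versioning point, here is a hedged sketch (the helper name and approach are mine, not from the project): it extracts the highest versioned GLIBC symbol a binary references from `objdump -T` output, which corresponds to the oldest glibc the binary can run against.

```shell
# Print the highest GLIBC symbol version found in `objdump -T` output read
# from stdin; that version is the minimum glibc required at run time.
max_glibc_version() {
  grep -o 'GLIBC_[0-9][0-9.]*' | sort -u -V | tail -n 1
}

# Usage sketch (the binary name is an example):
# objdump -T ./daemon | max_glibc_version
```

If the reported version is newer than the libc of the oldest distribution you want to support, some dependency (or our own code) pulled in a symbol that is too new.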
For building Mac releases with an up-to-date toolchain, I (and I believe @DolceTriade) have systems that are capable.
Some versions I gathered:
| Distro | CMake | CMake backport | CMake upstream | GCC | Clang upstream | MinGW | AppleClang |
|---|---|---|---|---|---|---|---|
| Debian old-old-stable 10 Buster | 3.13.4 | 3.18.4 | 4.0.3 | 8.3.0 | 20.1.8 | 8.3.0 | |
| Debian old-stable 11 Bullseye | 3.18.4 | 3.25.1 | 4.0.3 | 10.2.1 | 20.1.8 | 10.2.1 | |
| Ubuntu LTS 24.04.2 Noble | 3.28.3 | | 4.0.3 | | | 11.0.1 | |
| Ubuntu 24.10 Oracular | 3.28.3 | | 4.0.3 | | | 13.2.0 | |
| Darling 0.1.20220929 | 3.28.6¹ | | | | | | 9.0² |
| macOS 15 Sequoia | | | 4.0.3 | | | | 16.0.0 |
¹ Newer versions of CMake are available but don't run on that Darling build.
² Newer versions of Xcode may be available, but installing them is much more painful.
Edit: Bumped Darling CMake from 3.16.9 to 3.28.6.
> The real cause of libc incompatibility is not building with a newer header, but using new functions. glibc uses symbol versioning, so backward incompatibility only happens when you use a specific symbol that is too new for the libc being dynamically linked against. In our own codebase it is easy to avoid adding any fancy new syscalls. So the main problem is dependencies. If prebuilt dependencies from the distro package manager are used, they might depend on new stuff from the distro's libc.
Oh! I thought it was because linking against a newer libc may pull in newer symbol versions of the same functions, or something like that.
The good news is that I'm switching the release build to building the dependencies ourselves as much as possible (also to ensure Linux builds don't ship very outdated libraries).
> For building Mac releases with an up-to-date toolchain, I and I believe @DolceTriade have systems that are capable.
Well, having to synchronise people across the globe for building and testing is painful, but even I can build on recent macOS.
My main concern is that building the fully scripted way in docker is much more convenient.
Currently we're stuck with a very old Xcode and CMake in the dockerized macOS build, because we are stuck with an old Darling: Darling hasn't published a build in years (and the latest build had a bug)…
I assume at some point there will be native aarch64 macOS support, for the engine at least? Given that it now compiles and runs on a Raspberry Pi (let's stop focusing on that now, really).
Is the wasm migration still an achievable goal? Last time this was discussed, it was lacking some features required for our use case. #227 relates.
P.S. Debian Buster is EOL, and Bullseye is EOL next year, is it not? Is it even worth using these?
> Is the wasm migration still an achievable goal? Last time this was discussed, it was lacking some features required for our use case.
And AFAIK the missing feature was setjmp/longjmp, which has now been implemented in WASM, so it shouldn't be a roadblock anymore.
> P.S. Debian Buster is EOL, and Bullseye EOL next year is it not? Is it even worth using these?
Using Buster or Bullseye to build the engine has nothing to do with supporting those distributions.
The idea is to build against a libc that is older than the current stable distributions, to make sure the binary runs on every current stable distribution.
If we were building on a current stable distribution, for example the current Ubuntu LTS, which is from April 2024, the result may not run on the current stable Debian, which is from June 2023. By building on old-stable Debian, we make sure the build runs on both the current Ubuntu LTS and the current stable Debian, and very likely on every stable distribution that is not older than old-stable Debian, which includes every stable distribution newer than it.
So we intentionally use something older than Debian stable, not because we want to support old-stable Debian itself, but because we want to support everything that came after it, including non-Debian distributions.
Otherwise, if a given distribution had released its current stable only a few months before Debian released its current stable, our binaries might not run on that release. That would be a shame.
As for us still building against old-old-stable Debian, that's just because we haven't updated it yet, out of "please don't break something that works", but I'm now working on it anyway.
> The idea is to build against a libc that is older than the current stable distributions, to make sure the binary runs on every current stable distribution.
That doesn't make any sense.
> If we were building on a current stable distribution, for example the current LTS Ubuntu which is from April 2024, it may not run on the current stable Debian, which is from June 2023.
Trixie is releasing in 5 days...
> That doesn't make any sense.
This is what was said to me, and it is very likely why we did it that way. It's just a fact about how we got here (a fact we have to accept even if we dislike it); it says nothing about where we may go or how.
As described in this thread:
- @slipher has detailed another way to achieve the same result.
- I'm myself working on relying less on libraries provided by the system, which would help with not needing that.
> Trixie is releasing in 5 days...
No one said we will not switch to Bookworm as old-stable, nor that we will not switch to Trixie once it comes out, if we can ensure (as we did with the updater) that our build script still produces binaries with wide compatibility.
I was answering about the EOL point: the reason our build scripts use old distributions has nothing to do with whether those distributions are EOL.
Trixie being released in the coming days, months, or years is only very indirectly related to what we do; we're not maintaining and releasing Debian. We may use old things even if Debian declares them EOL, and we may use new things even if Debian doesn't yet have them in unstable, all at the same time. Whether this or that Debian release is EOL is simply not our concern.