Investigate Segregated Dynamic Linking
https://debconf18.debconf.org/talks/48-segregated-dynamic-linking-keeping-the-peace-between-incompatible-dependencies/
Using a library without being exposed to any of its dependencies & without requiring recompilation of the binary that pulls it in.
The goal is to improve the portability of programs that are distributed with runtimes but still need access to some libraries from the host (e.g., libGL), without requiring recompilation of either the host libraries or the application+runtime.
The background: https://people.collabora.com/~vivek/dynamic-linking/segregated-dynamic-linking.pdf
The code: https://gitlab.collabora.com/vivek/libcapsule
Looks like this can be used when the AppImage and the system libGL, libstdc++ are incompatible.
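For anyone who wants a concrete picture of the mechanism before diving into the PDF: the core idea rests on giving a library its own private link-map namespace, so its dependency tree never collides with the one already loaded into the program. Below is a minimal sketch using glibc's dlmopen(); libcapsule's real per-library shims are far more involved, and `libfoo.so` / `foo_compute` are made-up names.

```c
/* Minimal sketch of the namespace-isolation idea behind segregated
 * dynamic linking, using glibc's dlmopen(). libcapsule's real proxy
 * libraries are generated per-library and far more complete; this
 * only shows that a library can be loaded with its own private
 * dependency tree. "libfoo.so" and "foo_compute" are hypothetical. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* LM_ID_NEWLM gives libfoo.so (and everything it pulls in,
     * e.g. its own libstdc++) a fresh link-map list, so it never
     * clashes with the versions already loaded in LM_ID_BASE. */
    void *handle = dlmopen(LM_ID_NEWLM, "libfoo.so", RTLD_NOW | RTLD_LOCAL);
    if (!handle) {
        fprintf(stderr, "dlmopen: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    int (*foo_compute)(int) = (int (*)(int)) dlsym(handle, "foo_compute");
    if (!foo_compute) {
        fprintf(stderr, "dlsym: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    printf("foo_compute(21) = %d\n", foo_compute(21));
    dlclose(handle);
    return EXIT_SUCCESS;
}
```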
Hello @fledermaus do you think Segregated Dynamic Linking could be useful to run applications compiled on newer Linux distributions on older Linux distributions? Then it might be very, very useful for making what goes into AppImages.
So far we recommend that application developers compile applications on the oldest still-supported distributions, but in some cases the applications don't even build on those anymore.
And in some cases we need, e.g., a newer libstdc++. See here.
Is there a "hello world" example that we could try out?
On Sun, 15 Sep 2019, probonopd wrote:
> Hello @fledermaus do you think Segregated Dynamic Linking could be useful to run applications compiled on newer Linux distributions on older Linux distributions? Then it might be very, very useful for making what goes into AppImages.
It depends: libcapsule can't insulate you from APIs/ABIs that simply aren't available in the older distribution. However, we are working on ways to automatically pick the newest libc available and run the application with that, while picking a subset of libraries from the host OS.
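To illustrate what "automatically pick the newest libc" could mean in practice, here is a hedged sketch assuming the choice is made by comparing version strings. The bundled version string "2.28" is a made-up example; the real tooling would inspect the actual libraries rather than hard-code anything.

```c
/* Sketch of the "pick the newest libc" idea: compare the host's
 * glibc version against a version string assumed to describe the
 * bundled runtime, and prefer whichever is newer. The bundled
 * version "2.28" is hypothetical. */
#define _GNU_SOURCE
#include <gnu/libc-version.h>  /* gnu_get_libc_version() */
#include <string.h>            /* strverscmp() */
#include <stdio.h>

int main(void)
{
    const char *host    = gnu_get_libc_version(); /* e.g. "2.31" */
    const char *bundled = "2.28";                 /* hypothetical */

    /* strverscmp() orders version strings numerically, so
     * "2.9" < "2.10" comes out right, unlike plain strcmp(). */
    if (strverscmp(host, bundled) >= 0)
        printf("use host libc (%s >= %s)\n", host, bundled);
    else
        printf("use bundled libc (%s < %s)\n", bundled, host);
    return 0;
}
```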
> So far we recommend that application developers compile applications on the oldest still-supported distributions, but in some cases the applications don't even build on those anymore.
In combination with the automation mentioned above, and shipping some fallback libraries with the application image, I can see it working.
There's still some work needed on libc and binutils to make it truly automatic and transparent (which I was working on this time last year and should be able to pick up again soon - work has been hectic).
I think shipping any form of libc/libstdc++ is not only bloat-y but might be a security hazard, similar to how shipping crypto libs is. It might be okay sometimes (e.g., when you just need to calculate some hashes for some non-security purposes), but often it's not.
Have you considered that yet? Are there any references you can share?
On Fri, 20 Sep 2019, TheAssassin wrote:
> I think shipping any form of libc/libstdc++ is not only bloat-y but might be a security hazard, similar to how shipping crypto libs is. It might be
No, not really. As mentioned before, the infrastructure automatically picks the newer libc, since applications work fine with a newer libc but can fail to link at run time against an older one (missing symbols).
> Have you considered that yet? Are there any references you can share?
Yes, that is why the tooling being developed picks the most recent version: as the host OS is updated it will eventually acquire a newer libc, and that is what will end up being used.
@fledermaus is there a "hello world" of a C++ app that we could try out? I'd like to play with it a bit but am not sure where to even start.
> No, not really. As mentioned before, the infrastructure automatically picks the newer libc, since applications work fine with a newer libc but can fail to link at run time against an older one (missing symbols).
I guess you define "newer" by the version number, right? That's kind of myopic, though. In practice there's no plain libc 1.2.3 or 2.3.4; every distribution ships its own set of additional patches. After all, that's how long-term support works: old versions receive patches when there are security issues. So a 2.19 becomes a 2.19-18+deb8u10, for instance. Check the changelog for how many CVEs have been fixed: https://metadata.ftp-master.debian.org/changelogs//main/g/glibc/glibc_2.19-18+deb8u10_changelog
So, in fact, a 2.19-18+deb8u10 can be "newer" than an unpatched 2.20. And when you ship and use the 2.20 because 2.19 is too old for you, you end up with an unpatched, insecure library on the system, even if the system is updated regularly.
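A tiny sketch makes the point concrete: any purely numeric version comparison (here glibc's strverscmp(), used only for illustration) ranks the heavily patched Debian build below a vanilla upstream release.

```c
/* Demonstration of the point above: numeric version comparison
 * sorts the Debian build "older" than vanilla 2.20, even though
 * it may carry security fixes the vanilla release lacks. The
 * version strings are taken from the message above. */
#define _GNU_SOURCE
#include <string.h>  /* strverscmp() */
#include <stdio.h>

int main(void)
{
    const char *debian  = "2.19-18+deb8u10"; /* many CVE fixes backported */
    const char *vanilla = "2.20";            /* unpatched upstream */

    printf("%s %s %s\n", debian,
           strverscmp(debian, vanilla) < 0 ? "sorts before" : "sorts after",
           vanilla);
    return 0;
}
```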
> Yes, that is why the tooling being developed picks the most recent version: as the host OS is updated it will eventually acquire a newer libc, and that is what will end up being used.
Even if a host system is updated, that doesn't mean it's hopping releases. A computer may very well run some LTS release or Debian oldstable or something like that. So I don't think your argument holds.
We constantly have to deal with older, still-supported releases, and it's really annoying at times. I'm very interested in a solution that doesn't bind people's development to the lowest common denominator. Your approach really seems right. I just want to evaluate potential security issues, especially in the long term.
I know a pure version number is not perfect, but there really doesn't seem to be a better solution - the metadata regarding CVEs and fixes is not available in any standard form. You could look at the exposed API (what symbols are available), but even then, if one is not a strict superset of the other, how do you decide?
Having said that, I think one of my colleagues is working on an approach based on that kind of API-set comparison, so we'll see how that goes once they have it working.
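To make the "strict superset" question concrete, here is a hedged sketch of a symbol-set probe using glibc's dlvsym(). The required-symbol list and the libc path are made-up examples; real tooling would extract the list from the binary's dynamic symbol table rather than hard-code it.

```c
/* Sketch of the symbol-set comparison idea: probe a candidate libc
 * for the versioned symbols an application needs. The symbol list
 * and the library path below are hypothetical examples. */
#define _GNU_SOURCE
#include <dlfcn.h>   /* dlopen(), dlvsym() */
#include <stdio.h>

struct need { const char *sym; const char *ver; };

/* Hypothetical set of versioned symbols the application requires. */
static const struct need needs[] = {
    { "getrandom", "GLIBC_2.25" },
    { "memcpy",    "GLIBC_2.14" },
};

/* Return 1 if the library at `path` provides every needed symbol. */
static int satisfies(const char *path)
{
    void *h = dlopen(path, RTLD_LAZY | RTLD_LOCAL);
    if (!h)
        return 0;
    int ok = 1;
    for (size_t i = 0; i < sizeof needs / sizeof needs[0]; i++)
        if (!dlvsym(h, needs[i].sym, needs[i].ver))
            ok = 0;
    dlclose(h);
    return ok;
}

int main(void)
{
    /* Assumed path; varies by distribution and architecture. */
    const char *host = "/lib/x86_64-linux-gnu/libc.so.6";
    printf("host libc %s the required symbols\n",
           satisfies(host) ? "provides" : "is missing");
    return 0;
}
```

Even with such a probe, the open question from the message above remains: when neither candidate's symbol set contains the other's, a policy (newest upstream version, host preferred, etc.) still has to break the tie.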