Suggestion: Automatic builds for macOS / Linux
Would it be possible to set up some GitHub Actions (or another CI system) to automatically build and archive Beef for macOS and Linux? Manually building from source is not ideal because of the lengthy and intense LLVM build step. It'd be much nicer to have the option to download a prebuilt archive.
GitHub Actions should make this possible in principle, but I don't think it will work in practice. To explain why, I'll reference a small section of info found here about the virtual machines GitHub uses for hosted runners. In particular, the limitation of 14 GB of available storage is the main issue. I can't confirm it for macOS, but I have been unable to get Beef to build on Linux even with 20 GB of space. This could be worked around with a self-hosted runner, but that means someone other than GitHub would need to maintain it, and it would require a machine (virtual or otherwise) for each system being built for. Another CI system would be more or less similar to a self-hosted runner, with similar limitations; the main differences are that it isn't connected to GitHub Actions and it requires knowledge of a tool like Jenkins.
To sum up the big block of text: unless the build can take up less space, or someone puts in the work to maintain a build system for it, it can't happen. It's certainly not impossible, though; it just takes some knowledge in the area and enough money to throw a bunch of various machines at it.
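For context on the storage limit: the hosted Ubuntu images ship several multi-gigabyte toolchains that a Beef build doesn't need, and space-starved workflows commonly delete them in a first step. Here's a non-destructive sketch that just measures the candidates (the paths are the usual ones on GitHub's `ubuntu-*` images; a real workflow step would `rm -rf` them before the LLVM build, and whether the reclaimed space is enough is an open question):

```shell
# Report free space, then measure the big preinstalled toolchains that a
# space-starved GitHub Actions job typically deletes first. The paths are
# from GitHub's hosted Ubuntu images; elsewhere they just won't exist.
df -h /
for d in /usr/share/dotnet /usr/local/lib/android /opt/ghc; do
  du -sh "$d" 2>/dev/null || echo "$d not present"
done
```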
TLDR: I'm looking for platform contributors/owners.
There's already a CI system which runs tests and builds binaries for Windows and Linux, triggered by GitHub Actions. macOS is currently just manually tested occasionally. Archiving "nightly builds" is a separate issue from "official builds", but I do already have plans to do some sort of nightly Windows builds at some point...
One of the reasons that there aren't any binary distributions for macOS/Linux yet is that I just haven't had the time to dedicate to figuring out how it's actually supposed to be done. Someone could accelerate this by contributing platform knowledge and work:
macOS:
- Fix the build.sh script to properly link on macOS. It "works on my machine", but others are clearly having issues linking to libffi. Probably something simple.
- Contribute a script to build an "installer" in whatever form is appropriate for macOS. This can't just be a dump of files; old files also need to be removed from the target location when updating versions (otherwise, if you rename a .bf file, for example, both the original and the new version would be found, which will cause build errors).
- Do people expect to install things like languages from package managers on this platform? If so, how do we integrate builds with that?
- Windows has great support for user crash debugging: when we build installers, we tag the debug information with Source Server information which ties the build to the actual GitHub source versions, then we submit debug information and binaries to a symbol server. When a user crashes, they send a minidump file which we can load in a debugger, automatically pulling down the PDBs and source code to inspect the crash. Is there any equivalent of these things on macOS?
Linux:
- The build.sh script is also having problems on this platform, and no one has offered up any assistance yet.
- What are the binary-release constraints of Linux? Even if we assume x86-64, are there binary compatibility issues between different distros or kernel versions or anything else?
- How would a binary even be released? Package managers?
I just checked what Rust does. They have a curl command you're supposed to copy and run, which detects your architecture and such and downloads and installs the appropriate binaries. Is this the right way? From my Windows mindset it seems pretty strange, but these are not platforms I "live on" so I can't properly judge what's proper there.
I've been thinking about this some more recently. I can't speak for Linux, but for macOS...
> - Fix the build.sh script to properly link on macOS. It "works on my machine", but others are clearly having issues linking to libffi. Probably something simple.
I know there's been some work on the build scripts since this issue was posted, so maybe this isn't a problem anymore...?
> - Contribute a script to build an "installer" in whatever form is appropriate for macOS. This can't just be a dump of files; old files also need to be removed from the target location when updating versions (otherwise, if you rename a .bf file, for example, both the original and the new version would be found, which will cause build errors).
I have no idea on this one. My personal inclination would be to skip a downloadable installer program altogether, and instead rely entirely on package managers and pre-packaged zip releases on GitHub. Other languages, such as Nim, take this approach.
> - Do people expect to install things like languages from package managers on this platform? If so, how do we integrate builds with that?
Yes. Most macOS programmers use the Homebrew package manager for installing non-Xcode development tools. It's extremely popular and should be fairly easy to integrate. My current understanding is that we would need to create a Beef "formula" (package definition) and, using that formula, create a "bottle" (binary distribution) for each tagged release. For an example, you can see Nim's formula. I'm not totally sure how uploading the packages works, but it's probably not too complicated.
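To make the formula idea concrete, here's a rough sketch of what one might look like. Everything in it — the tag, checksum, dependency list, build command, and output path — is a placeholder I'm guessing at, not verified against the actual repo layout:

```shell
# Write a hypothetical Homebrew formula for Beef. The URL tag, sha256,
# build steps, and binary paths are all placeholder sketches, not the
# real build procedure.
cat > beef.rb <<'EOF'
class Beef < Formula
  desc "Beef programming language"
  homepage "https://www.beeflang.org"
  url "https://github.com/beefytech/Beef/archive/refs/tags/vX.Y.Z.tar.gz"
  sha256 "0000000000000000000000000000000000000000000000000000000000000000"
  depends_on "cmake" => :build
  depends_on "llvm"

  def install
    system "./build.sh"               # assumes build.sh produces the binaries
    bin.install "IDE/dist/BeefBuild"  # hypothetical output path
  end

  test do
    system "#{bin}/BeefBuild", "-version"
  end
end
EOF
echo "wrote beef.rb"
```

From there, `brew install --build-from-source ./beef.rb` would exercise the formula locally, and Homebrew's own CI builds the bottles once a formula is accepted.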
> - Windows has great support for user crash debugging: when we build installers, we tag the debug information with Source Server information which ties the build to the actual GitHub source versions, then we submit debug information and binaries to a symbol server. When a user crashes, they send a minidump file which we can load in a debugger, automatically pulling down the PDBs and source code to inspect the crash. Is there any equivalent of these things on macOS?
Sort of... there's a system error log where crash reports can be viewed. But if there's no dedicated installer program anyway, I'm not sure this matters. If brew has a problem installing a package, it will loudly complain about it in detail; from there it's just a matter of copy-pasting the output into a GitHub issue.
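For crash symbolication specifically, the closest macOS analogue I know of is archiving a .dSYM bundle per release and symbolicating user crash reports with atos. A guarded sketch, where "BeefIDE" and the addresses are made-up placeholders:

```shell
# Sketch of macOS crash symbolication. Requires the Xcode command line
# tools; "BeefIDE" and the load/crash addresses are placeholders, so the
# commands are skipped unless that binary actually exists here.
if command -v dsymutil >/dev/null && [ -f BeefIDE ]; then
  dsymutil BeefIDE    # bundles DWARF debug info into BeefIDE.dSYM
  atos -o BeefIDE.dSYM/Contents/Resources/DWARF/BeefIDE \
       -l 0x100000000 0x100003f2c    # maps a crash address to a symbol
else
  echo "dsymutil/atos not available here; shown for illustration"
fi
```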
> I just checked what Rust does. They have a curl command you're supposed to copy and run, which detects your architecture and such and downloads and installs the appropriate binaries.
That approach is pretty nice for multi-platform development, but it might be a bit overcomplicated for Beef's purposes. I think a good start would be to just provide x64 binaries for macOS and Linux, with other architectures maybe being supported in the future. (Like armv8 on macOS for Apple Silicon.) This makes cross-platform development much more accessible, even if it doesn't cover 100% of potential use cases.
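For a sense of scale, the detection half of a Rust-style install script is quite small; a minimal sketch, where the artifact naming scheme is hypothetical and only the detection logic is the point:

```shell
# Map `uname` output to a release-artifact name. The beef-*.tar.gz
# naming pattern is a made-up placeholder.
os="$(uname -s)"
arch="$(uname -m)"
case "$os-$arch" in
  Linux-x86_64)   target="x86_64-linux"  ;;
  Linux-aarch64)  target="aarch64-linux" ;;
  Darwin-x86_64)  target="x86_64-macos"  ;;
  Darwin-arm64)   target="aarch64-macos" ;;
  *) echo "unsupported platform: $os-$arch" >&2; exit 1 ;;
esac
echo "would fetch beef-${target}.tar.gz"
```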
Thanks for the info. One day I do hope to find platform contributors to help set this stuff up.
@bfiete I'm interested in helping with platform contributions. Where should I start? Shall I just tinker with the considerations voiced in this issue and submit a PR? Or is there a more RFC-like process you'd like to go through?
Starting by proposing some solutions/directions, or even summarizing what other projects have done would be a good start.
> 2. What are the binary-release constraints of Linux? Even if we assume x86-64, are there binary compatibility issues between different distros or kernel versions or anything else?
To my knowledge, binary compatibility issues across Linux are mostly caused by dynamically linked libraries which may or may not be present, or may live in inconsistent locations. I don't think this is an issue for Beef, though, since (from what I can tell by poking around with ldd and such) BeefBuild doesn't try to link to anything that isn't part of the Linux Standard Base. I messed around for a while with testing a build of Beef on different machines, but got hung up on libIDEHelper.so only looking for libhunspell.so at the static, absolute path where it was at compile time. I'll probably try again tomorrow, but I suspect it will work fine once it can find everything. I'm sure there's also some way to compile everything so that all the .so files can be packaged together neatly for a release, but I don't know much about that.
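If the absolute-path lookup really is the blocker, the usual fix is an $ORIGIN-relative rpath, so the loader searches next to the library itself instead of the build machine's paths. A guarded sketch (patchelf and the library in question may not be present on a given machine):

```shell
# Inspect and rewrite a shared library's rpath so bundled .so files are
# found relative to it. libIDEHelper.so is the library named in this
# thread; the commands are skipped if it isn't present here.
lib=libIDEHelper.so
if [ -f "$lib" ] && command -v patchelf >/dev/null; then
  ldd "$lib" | grep hunspell            # show where libhunspell.so resolves now
  patchelf --set-rpath '$ORIGIN' "$lib" # search the library's own directory
  readelf -d "$lib" | grep -i runpath   # confirm the new RUNPATH entry
else
  echo "skipping: $lib or patchelf not available"
fi
```

The same effect can be baked in at link time with `-Wl,-rpath,'$ORIGIN'`, which would avoid a post-build patching step.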
> 3. How would a binary even be released? Package managers?
>
> I just checked what Rust does. They have a curl command you're supposed to copy and run, which detects your architecture and such and downloads and installs the appropriate binaries. Is this the right way? From my Windows mindset it seems pretty strange, but these are not platforms I "live on" so I can't properly judge what's proper there.
There are basically four main ways (that I know of and can think of) that this is usually done: package managers, DIY package managers (what Rust does), containerized package managers, and just providing a .tar.gz with binaries. I'm not an expert on any of these, but I'll summarize what I know:
- Package managers: The older, more standard solution. This generally works pretty well, but I imagine it can be a bit time-consuming due to the variety of package managers and package formats used by different distros.
- DIY package managers: This seems to be growing in popularity recently. You trade having to deal with different package formats for having to build and maintain your own package system/version manager. How Rust does this is a bit unusual, as telling people to curl a script into bash from some URL isn't super secure, but beyond that this seems like a pretty good solution.
- Containerized package managers (e.g. Snap or Flatpak): These have become trendy recently as they allow you to support a lot of different distros while only having to worry about compatibility with one. This seems like it might be a bit tricky to get working for something like BeefBuild, but it is an option.
I think I would recommend simply providing .tar.gz and .deb packages, since those are pretty straightforward to make and are convenient, or at least usable, for most Linux users.
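Both of those are mechanically simple. A sketch of what producing them looks like, where the staging layout, file names, and control fields are all placeholders rather than Beef's real release layout:

```shell
# Stage a hypothetical layout and produce both a .tar.gz and a minimal
# .deb from it. Paths, names, and control fields are all placeholders.
mkdir -p dist/beef/bin
printf '#!/bin/sh\necho BeefBuild placeholder\n' > dist/beef/bin/BeefBuild
chmod +x dist/beef/bin/BeefBuild

# Plain tarball release: unpack anywhere, add bin/ to PATH.
tar -czf beef-x86_64-linux.tar.gz -C dist beef

# Minimal Debian package: payload tree plus a DEBIAN/control file.
mkdir -p pkg/DEBIAN pkg/usr/local/bin
cp dist/beef/bin/BeefBuild pkg/usr/local/bin/
cat > pkg/DEBIAN/control <<'EOF'
Package: beef
Version: 0.0.1
Architecture: amd64
Maintainer: Placeholder <nobody@example.com>
Description: Beef programming language (placeholder package)
EOF
if command -v dpkg-deb >/dev/null; then
  dpkg-deb --build pkg beef_0.0.1_amd64.deb
else
  echo "dpkg-deb not available; .deb step skipped"
fi
```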
Ok, I did some testing. I built BeefBuild on my Pop!_OS 20.10 system and tested it on Ubuntu 20.10, Arch Linux, and Ubuntu 18.04. All systems were able to compile and run hello world and the space game sample without issue, except for the 18.04 system, where BeefBuild didn't run since it expected glibc 2.32 and that system only had 2.30 (the Arch machine was using glibc 2.33). glibc is backward compatible but not forward compatible, so this could probably be fixed by just compiling on an older distro or against an older glibc release.
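For choosing that baseline, you can ask a binary directly which glibc symbol versions it demands; any system with at least that glibc can run it. Here /bin/ls just stands in for BeefBuild, since the check is the same for any ELF binary:

```shell
# Print the newest GLIBC_* symbol version a binary requires. /bin/ls is
# a stand-in for BeefBuild; objdump comes from binutils.
bin=/bin/ls
if command -v objdump >/dev/null && [ -f "$bin" ]; then
  objdump -T "$bin" | grep -o 'GLIBC_[0-9]*\.[0-9]*' | sort -Vu | tail -n 1
else
  echo "objdump not available (binutils needed)"
fi
```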