Use CI to build release binaries #200
Comments
Yes, this would be great if it could be automated. I think Travis now has OSX guests and we have AppVeyor for Windows, so it should be possible for most of these. I was building the FreeBSD and linux-686 binaries in VirtualBox images on my workstation, but I don't think those are as needed as the others if keeping them proves to be a problem.
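As a rough illustration of that split, here is a minimal sketch of the Travis side, assuming a hypothetical `.travis.yml`; the make targets are placeholders, and the Windows builds would live in a separate `appveyor.yml`:

```yaml
# Hypothetical .travis.yml fragment: build on both Linux and OSX guests.
language: c
os:
  - linux
  - osx
script:
  - make         # build luvi (the real target depends on luvi's Makefile)
  - make test    # run the test suite
```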
It's enough if CI automatically publishes GitHub releases for win-x64, mac-x64, and linux-x64. The others are optional.
Sure, that would be the bare minimum. Next in importance, I think, would be the arm32 and arm64 variants; those platforms are getting more and more popular.
That would mean the ARM platforms need manual releases; I'm not aware of any free CI platform that offers ARM.
Sometimes you can build them inside Docker containers or chroots, using qemu-user-static to emulate ARM CPUs.
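As a rough sketch of that docker-plus-qemu-user-static approach in a Travis-style job (the images, packages, and make target below are illustrative assumptions, not luvi's actual build scripts):

```yaml
# Hypothetical CI job: register qemu-user-static binfmt handlers, then run the
# normal build inside a 32-bit ARM container on the x86 host.
services:
  - docker
script:
  # register qemu interpreters for foreign architectures with the host kernel
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  # build inside an ARM container; /src is the checked-out repository
  - docker run --rm -v "$PWD":/src -w /src arm32v7/debian sh -c "apt-get update && apt-get install -y build-essential cmake git perl && make"
```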
I'm going to start working on the Windows targets now. Will do the Linux/Mac ones after that.
Only does the deploy phase for tagged commits. Contributes towards luvit#200
Only does the deploy phase for tagged commits. Contributes towards #200
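For context, restricting the deploy phase to tagged commits in Travis is done with an `on: tags` condition. A minimal sketch, where the token variable and artifact glob are illustrative assumptions:

```yaml
# Hypothetical .travis.yml deploy stanza: upload artifacts to a GitHub release
# only when the build was triggered by a git tag.
deploy:
  provider: releases
  api_key: $GITHUB_TOKEN    # secret CI variable holding a GitHub token (placeholder name)
  file_glob: true
  file: build/luvi-*        # whatever artifacts the build step produced (placeholder)
  skip_cleanup: true
  on:
    tags: true              # the "deploy phase for tagged commits" condition
```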
For Linux:
For Mac:
For ARM:
For FreeBSD:
Only does the deploy phase for tagged commits. Contributes towards luvit#200
@squeek502 I'll continue and finish the Mac build based on your code.
Here goes nothing:
EDIT: AppVeyor failed due to the auth_token:
Either I messed something up, or we'll need to use an auth token from a different account than mine.
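For reference, AppVeyor's GitHub release provider expects the token as an encrypted `auth_token` generated by the account that will own the release. A minimal sketch, with the artifact pattern as an illustrative assumption:

```yaml
# Hypothetical appveyor.yml deploy section: publish artifacts to a GitHub
# release, but only for tagged builds.
deploy:
  provider: GitHub
  auth_token:
    secure: <token encrypted with AppVeyor's Encrypt-YAML tool>
  artifact: /luvi-.*/        # regex matching the pushed build artifacts (placeholder)
  on:
    APPVEYOR_REPO_TAG: true  # deploy only when the build comes from a tag
```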
I do
I went with a different approach, and AppVeyor seems to be working now: https://github.com/luvit/luvi/releases/tag/v2.9.3-temp. I'll clean things up once the build finishes: I'll move the Windows binaries to the v2.9.3 tag and delete the v2.9.3-temp tag, then revert c6a410e.
Sorry about that; I will do nothing for the next 4 hours. Please re-tag v2.9.3 with --force.
OK, I think we're good to go. The Windows binaries have been moved to the v2.9.3 tag. Here's an example of the random build error I mentioned in #202, btw: https://travis-ci.org/luvit/luvi/jobs/513318947
Checked the package downloads from https://github.com/luvit/luvi/releases/tag/v2.9.3; tests pass.
Tested the rest except the Windows .lib files. Everything seems good.
to @squeek502
Using Docker and holy-build-box has portability benefits: https://github.com/phusion/holy-build-box
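A minimal sketch of how holy-build-box is typically invoked; the image tag and build commands are illustrative assumptions. The point is that the resulting Linux binary is linked against an old glibc, so it runs on most distros:

```yaml
# Hypothetical CI step: run the build inside the holy-build-box container so
# the binary links against an old glibc. Commands are placeholders.
script:
  - docker run --rm -v "$PWD":/io phusion/holy-build-box-64 bash /hbb_exe/activate-exec bash -x -c "cd /io && make && make test"
```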
I've had good experience using Docker combined with qemu-user-static to build things in ARM containers on x86 kernels/hosts.
Looks like we can use GitHub Actions to build the ARM binaries. @truemedian is doing just that here: https://github.com/truemedian/luvit-bin/blob/main/.github/workflows/manual.yml#L154-L196
My approach in luvit-bin for the ARM binaries is... not the best, nor would I recommend it. It requires running a full-system VM on GitHub's Actions machines, which turns a 2-3 minute build process into a 30-40 minute one.
Surely it's better than nothing. If we don't want to take the hit on every CI run, we could build ARM only for tagged releases.
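A minimal sketch of the lighter-weight option: QEMU user-mode emulation on GitHub Actions via `docker/setup-qemu-action`, gated so the slow ARM job only runs for tag builds. The container image, packages, and make target are illustrative assumptions:

```yaml
# Hypothetical GitHub Actions job: build ARM binaries through QEMU user
# emulation, and only when the workflow was triggered by a tag.
arm-build:
  if: startsWith(github.ref, 'refs/tags/')
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
      with:
        submodules: recursive
    - uses: docker/setup-qemu-action@v2   # registers binfmt handlers for foreign arches
    - name: Build inside an arm64 container
      run: |
        docker run --rm -v "$PWD":/src -w /src arm64v8/ubuntu \
          sh -c "apt-get update && apt-get install -y build-essential cmake git perl && make"
```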
I decided to take another stab at this for building just luvi (this can rather easily be extended to both lit and luvit if need be). The ARM + OpenSSL binaries take the longest, so that's what these times are based on. From scratch: 37m 31s. Surprisingly, this is about on par with the current CI setup in terms of speed (when the ccache is full; otherwise, having to compile slows the process down massively for ARM builds). This allows for multiple things:
A few things of note:
Honestly, one thing I wanted to change before releasing 2.15.0 is updating miniz to the latest master branch, but I'm having issues building and testing it, so I can't be bothered with it at the moment.
Should be ready now; miniz and LPeg have been updated to the latest versions.
Indeed. As for the remaining checkboxes: we now support aarch64/arm64, but I guess we have to drop armv6 and armv7? FreeBSD is supported, but not with Actions.
We don't necessarily have to drop armv6 and armv7; however, there are very few existing, maintained projects aimed at wide glibc support. I didn't include them in my CI overhaul because I couldn't find a nice solution for maintaining such an environment. If one exists, though, it would be relatively trivial to add it to the build process.
Try downloading the ARM cross-build toolchains from https://developer.arm.com/downloads/-/gnu-a.
There are two problems there: we need to build with an old enough version of glibc that the result will work on practically any distro running luvi, and we cannot cross-compile. CMake does not understand the difference between host and target binaries, that distinction is required to build LuaJIT, and our build process is not suited to cross-compiling OpenSSL.
Yes, we would need to build a CMake toolchain config for that. I consider it a long-term task; it won't be done in 2.15.0.
Just took a look around, and https://github.com/vmactions/freebsd-vm seems like it may be a decent fit for building the FreeBSD binaries. The same organization also has similar actions for OpenBSD, Solaris, and DragonFly, if we'd like to support those.
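A minimal sketch of what a vmactions-based job could look like; the action version, packages, and build commands are illustrative assumptions, not luvi's actual CI:

```yaml
# Hypothetical GitHub Actions job: run the build inside a FreeBSD VM.
freebsd:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
      with:
        submodules: recursive
    - name: Build in a FreeBSD VM
      uses: vmactions/freebsd-vm@v1
      with:
        usesh: true
        prepare: pkg install -y cmake gmake perl5 git
        run: |
          gmake          # build (the real target depends on luvi's Makefile)
          gmake test
```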
The rest of the remaining architectures are lower priority for a lot of people, I believe. Supporting arm64 is the most important one besides the other major architectures because of the new ARM chips; I am not sure whether the Raspberry Pi is arm64 or armv7, but that could potentially influence the decision. We can probably take a similar approach for armv6 and armv7, having QEMU do its magic. I think it is fine to cut a luvi release in the current state.
Currently, Luvi's newer releases only have a few precompiled binaries. We've been meaning to set this up for a while but have never gotten around to it. Here's an example of when release deploying was added to luv via Travis: luvit/luv@6b901a2
Related issues: #187, #181
Here's what we had in v2.7.6: https://github.com/luvit/luvi/releases/tag/v2.7.6