Publish wheels to PyPI #62
Agreed! I have been pre-building my own wheels too, but instead of running my own PyPI server, I copy the wheel file to the target device (a Raspberry Pi) and install it with the command `pip install -r requirements.txt gpiod.whl`, where the …

@brgl, let us know if you would like a contribution (PR) towards automating the building and publishing of wheels using GitHub Actions (free of charge for public repos). I have actually found a short guide in the …
I've spoken to @brgl about this, and he's generally in favor of doing something but hasn't had time to look into it. Maybe a quick PR to generate wheels is a good first step to show how it'd work. I'm not sure if other kernel SCM mirrors have set a precedent of including GitLab or GitHub CI files. Generally, releases are triggered on tags, and the Python bindings follow their own versioning cadence semi-independent of the libgpiod version, so maybe Python-binding-specific tags can live here for CI purposes.
It's evident by now that I don't and most likely will not have the time to address this. After the xz fiasco I'm also reluctant to have other people prepare binary releases of any kind. @pdcastro @vfazio Do you know what it would take (within the libgpiod repo) to make it possible to generate wheels for a number of popular architectures? Let's say x86-64, arm, aarch64, mips?
@brgl I don't think it'd be too difficult. The first step is identifying how you're creating the sdists. Once that's determined, building the actual wheels is pretty straightforward. Then we'd need to set up a pipeline stage that uses secrets only you have access to, stored in GitHub, to perform the actual publish via twine or some other tool. This stage would only happen when you tag a new release for the bindings, so that would need to be defined too, but we should prioritize just getting things automatically building for now.

For architectures, wheels via cibuildwheel are limited to x86-64, i686, arm64, ppc64le, and s390x. I'm not sure why they don't have mips containers, and arm is a hellscape due to armv7/armv6 differences; I don't see any tagged manylinux images available at a glance. I think getting x86-64 and arm64 up should be the priorities, since RPi and friends are now shipping 64-bit userland for armv8 boards. piwheels can worry about building arm 32-bit variants if they want.

My availability varies a lot based on fires at work, but I can try to work on this sometime this or next week unless @pdcastro has time they can guarantee to work on it sooner.
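To illustrate how straightforward the build half is, a minimal local invocation (the Python versions and architectures here are just an example, not a decided matrix) could look like:

```sh
# Requires Docker; non-native aarch64 builds also need binfmt/qemu set up.
python3 -m pip install cibuildwheel

# Build CPython wheels for x86_64 and aarch64 using the manylinux containers.
CIBW_BUILD='cp39-* cp310-* cp311-* cp312-*' \
CIBW_ARCHS_LINUX='x86_64 aarch64' \
    python3 -m cibuildwheel --platform linux --output-dir wheelhouse
```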
I prepare them like this: …
I haven’t set up GitHub Actions before, but I was doing some reading. Regarding “secrets” and the tool for publishing: …

This action is used by the GitHub Actions “official” workflow template (python-publish.yml) that GitHub recommends when I visit the Actions tab of my fork of the libgpiod repo. That particular template is several years old and still uses a password attribute, but the current pypa GitHub Action explicitly documents support for Trusted Publishing. Meanwhile, twine (which I have not personally used before) does not seem to mention Trusted Publishing in its documentation; instead it mentions username and password. I would be inclined to try using pypa’s GitHub Action to publish the wheels to PyPI, with Trusted Publishing.
Let’s talk a bit about this. I gather that a GitHub Actions workflow can be manually triggered and given some input parameters (?), and we could start with that, but let’s also talk about what @brgl would be comfortable with as the end result.

@vfazio previously mentioned that libgpiod’s Python binding releases “follow their own versioning cadence semi independent of the libgpiod version, so maybe python binding specific tags can live here for CI purposes.” Indeed, at the moment I gather that the latest gpiod release on PyPI is version 2.1.3. Not only does this not match the version number (tag) of libgpiod on GitHub and kernel.org (where the latest version is currently 2.1.1), but also the latest version of …

I can think of several arguments to justify this arrangement. Perhaps the changes from libgpiod 2.1.0 to 2.1.1 are not relevant to the Python bindings, and therefore a new release of the Python bindings on PyPI.org would be “just noise” that might trigger users to update their app unnecessarily. Or there is the scenario where a tiny change that only affects the PyPI release — say, PyPI metadata about the package itself — might warrant a new release of the PyPI package but not a new release of libgpiod.

However, I suspect that making such considerations — deciding for sure whether some code change requires a new release to PyPI or not, what version number to use, and how/when to trigger the new release — is extra work for maintainers, compared to fully automated releases from a single source of truth. Also, in my view, the “principle of least surprise” and the conceptual simplicity of “same version number everywhere” outweigh the benefits of different version numbers.

In short, my opinion is that whenever a new tag or release happens on GitHub, it should automatically cause a new release to PyPI.org, using the same version number. Not just the wheels, but the source as well, the README, the PyPI metadata (that is stored in the repo)... All the maintainer needs to do is create a tag on kernel.org / GitHub. The rest is automated through GitHub Actions.

That’s my opinion, but we don’t have to do that. We could keep the versioning as it is, with manual releases to PyPI. It’s up to @brgl. If we were to keep things as they are, it would be useful to discuss what we envision the trigger to be, in GitHub, to cause wheels to be published to PyPI. GitHub documents the full list of possible events, like a tag being created through GitHub’s web UI or pushed through a git command line. For example, I thought that a tag following a name pattern like … What do you guys think?
Indeed it seems that piwheels have already been producing 32-bit arm wheels for the gpiod package. I haven’t tested those wheels.

For a project of mine (Raspberry Pi 2, armv7l), I have been building armv7l wheels using just Docker Desktop, with a Dockerfile like the following:

```dockerfile
# https://hub.docker.com/_/python
FROM python:3.12.0-alpine3.18

# Install build dependencies
RUN apk add autoconf autoconf-archive automake bash curl g++ git libtool \
    linux-headers make patchelf pkgconfig
RUN pip3 install auditwheel

# libgpiod
WORKDIR /usr/src/libgpiod
ARG BRANCH=v2.0.2
RUN cd /usr/src && git clone --branch $BRANCH --depth 1 --no-tags \
    https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git
RUN ./autogen.sh --enable-tools=no --enable-bindings-python && make
RUN cd bindings/python && python setup.py sdist bdist_wheel

# $(uname -m) is either 'armv7l' (32 bits) or 'aarch64' (64 bits)
RUN cd bindings/python/dist && LD_LIBRARY_PATH="/usr/src/libgpiod/lib/.libs" \
    auditwheel repair --plat "musllinux_1_2_$(uname -m)" *.whl
```

This builds the wheels while building the Docker image. Then I take the wheels out and discard the image. It may be possible to change it such that the wheels are built when the image is executed with `docker run` instead.
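Taking the wheels out can be done without running the image, for example (the image tag and container name are just illustrative):

```sh
# Build the image (on an ARM host, or under emulation).
docker build --build-arg BRANCH=v2.0.2 -t gpiod-wheels .

# Copy the repaired wheels out of a throwaway container, then clean up.
# auditwheel writes them to a 'wheelhouse' directory by default.
docker create --name gpiod-wheels-tmp gpiod-wheels
docker cp gpiod-wheels-tmp:/usr/src/libgpiod/bindings/python/dist/wheelhouse ./wheelhouse
docker rm gpiod-wheels-tmp
```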
This might be a reason to prefer building the 32-bit arm wheels through GitHub Actions as well, if we can make it work through Docker for example, rather than leaving it to piwheels.org. It’s an option. And yes, we would start with …
I can have a go, yes. Advice / suggestions / contributions are also very welcome, of course. 👍
Is there any chance of me being able to do it locally on my laptop with the right toolchains installed?
Yes. I detached the versioning of libgpiod and the Python bindings for various reasons. There's …
@brgl yes, this is super simple and takes roughly 10 minutes; please see the example in the initial issue description. It does not require any toolchain install for you. The toolchains are in the Docker containers used by cibuildwheel.

Unless you're asking if this can be done without cibuildwheel. It automates a lot of the steps and is maintained by pypa, so it has a lot of eyes on its code. In order to generate wheels, you'd want to use the manylinux containers they (pypa) publish, which have the right mix of toolchains and libraries to conform to the PEP definitions for the wheel labels. You'd build the wheel as usual and then run auditwheel, which essentially vendors in runtime libraries and modifies the rpath to point to those libraries, if I recall correctly. cibuildwheel just performs all of this for you and iterates through the defined list of architectures and platforms, making it easier, so it's not technically required.

I understand the abundance of caution; the whole community is on high alert after Jia Tan's shenanigans in xz and everything is getting extra scrutiny.
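For reference, the manual path sketched above amounts to something like this (the wheel filename is hypothetical):

```sh
# Inside a manylinux container: build a plain 'linux_x86_64' wheel as usual...
python3 -m pip wheel . --no-deps -w dist/

# ...then let auditwheel vendor in the shared libraries and adjust the
# rpath, retagging the wheel with the matching manylinux platform tag.
auditwheel repair --plat manylinux_2_28_x86_64 \
    -w wheelhouse/ dist/gpiod-2.1.3-cp312-cp312-linux_x86_64.whl
```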
Yeah, I was thinking about something that'd use toolchains from my computer and no containers. I'm currently trying to finish the RFC version of the DBus API I want to send for review. Then I'm leaving for EOSS and coming back to work on April 22nd. I'll try to give cibuildwheel a spin then.
Thanks for being open to this request. It's a lot to take in, but it would be extremely helpful to us. I appreciate your and Kent's work on gpiod. I haven't had a good chance to contribute to the project yet, but hope to in the future.
Why “no containers,” if I may ask? Would you also be opposed to a GitHub Actions workflow that involved containers? GitHub Actions installs dependencies in a virtual machine at a minimum; see Standard GitHub-hosted runners for public repositories. The use of containers by GitHub Actions might be useful to emulate a 32-bit arm architecture, for example — to be confirmed.
I took a couple of hours to work up the CI on my fork. The results can be seen here: https://github.com/vfazio/libgpiod/actions/runs/8654825182

This example workflow triggers on tags named "bindings-python-*" (in case you end up adding other bindings and want to trigger CI builds for those). It checks out the repo from the tag, moves into the bindings directory, and then triggers an sdist build via …

It builds wheels for x86_64 and aarch64 for both manylinux2014 and manylinux_2_28. Python distributions should detect the highest compatible version and use that when wheels are available (trying 2_28 first, then falling back to 2014), and build from the sdist if no compatible wheel is found.
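The local equivalent of those workflow steps would be roughly the following (assuming a recent cibuildwheel, which accepts an sdist tarball as its build target):

```sh
cd bindings/python

# Build the source distribution first; it is the input for the wheel builds.
python3 -m pip install build
python3 -m build --sdist

# Feed the sdist to cibuildwheel and build wheels for both architectures.
CIBW_ARCHS_LINUX='x86_64 aarch64' \
    python3 -m cibuildwheel --platform linux --output-dir wheelhouse dist/*.tar.gz
```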
@vfazio, I had got that far too: 🙂 …

Given that it seems to be easier for you: …

And given that the work seems to have become urgent for you: …

It may be better that I step out of the way and let you finish it.
I wouldn't say "urgent". There's not much I can do since I do not control the PyPI repo. But I did have some extra time today after getting off work and wanted to putz around with GitHub Actions a bit. Seemed like a good learning opportunity.

Bart may still want to avoid using CI to have finer control of what is used to build these. I'm guessing there is still some concern about some sort of injection attack. Some of this can be allayed by verifying the containers and the outputs, and by sticking with explicit, non-moving tags for GitHub Actions. Auto-bumping utilities like Dependabot will likely be avoided, as will using major version tags for GHA, since those actually silently move. I thought about using hashes, and still can, but thought it was a bit overkill.
What I had planned to do next was: …

However, ending up with competing / overlapping PRs doesn’t sound to me like the best use of time. @vfazio, I now assume that you are going to continue the work. If that’s not the case, let me know.
Hey @vfazio, have you had a chance to prepare a pull request to move this issue forward? I don’t see related PRs in this repo at this time. If you have found yourself without the spare time, or for any other reason, I could still contribute the PRs mentioned in my previous comment.
It's less about time and more about respecting Bart's wishes about how to build and deploy wheels. My initial branch to do this work was a proof of concept to show that it could work. However, Bart has mentioned he would prefer to build these on his own, presumably to minimize the chance of some injection attack. If his thoughts have changed on this, I will be glad to make a PR.

I would not, however, plan to build 32-bit ARM wheels; I would stick to the more formalized 64-bit ARM wheels and whatever bitness of x86 he wants to target. I'm not too worried about libc variants. The biggest problem with 32-bit ARM wheels is the v6/v7 targets. piwheels has already built 32-bit wheels for RPi, so users that want those can use that index. I think most NXP ARM targets are running 64-bit userland.
Ha. You may be referring to the following comments: …

I took these comments to mean that Bartosz wanted to understand / learn the process of building wheels on his laptop, but ultimately the wheels would be built through CI automation such as GitHub Actions. @brgl, if you were thinking of building the binary wheels on your laptop and uploading them to GitHub on every release, let me ask you to reconsider, for the following reasons: …
I'd prefer to keep this as simple and friendly as possible and make the barrier to entry as low as possible. If Bartosz doesn't want to use CI, then it can still be done with cibuildwheel, as I've demonstrated. My concern is that trying to force the maintainer to do it one way or the other is likely to result in no wheels at all. If accepting that they're not built and published via CI means we at least get wheels, I'm OK with it. Otherwise, if we can remove the concern from CI and containers, then that is also a path we've both proven out. I'm here to help however I can, so please let me know if I can be of further assistance.
@vfazio @pdcastro I don't want to publish the wheels on GitHub (not sure if using GitHub Actions implies that). In fact, I'd prefer to not use GitHub infrastructure at all. I can use …
If this is the case, I think the steps I used in my first post are sufficient, as they do not use any GitHub infrastructure; they depend solely on cibuildwheel and Docker. While there are tools that allow you to run GitHub Actions locally, there isn't much value in them other than for debugging pipelines to make sure they work before pushing them. If the build process for gpiod was hairier, I could maybe see a use case, but it's really no different than writing a script to make sure the proper steps are taken for a release (download the correct tarball, unpack, set up a venv, install cibuildwheel, run it with a fixed argument list, publish to PyPI).
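As a sketch, such a release script (under the assumption that the maintainer passes in the URL of the released sdist tarball) would not need to be much more than:

```sh
#!/bin/sh -e
# Hypothetical release helper: build wheels from a released sdist tarball.
# Usage: ./build-wheels.sh <sdist-url>
SDIST_URL="$1"

# Fetch the sdist; it gets published alongside the wheels.
curl -LO "$SDIST_URL"

# Isolate the tooling in a throwaway venv.
python3 -m venv .venv
. .venv/bin/activate
pip install cibuildwheel

# Build manylinux/musllinux wheels for the chosen architectures (needs Docker).
CIBW_ARCHS_LINUX='x86_64 aarch64' \
    cibuildwheel --platform linux --output-dir wheelhouse "$(basename "$SDIST_URL")"

# Publishing (e.g. via twine) is intentionally left out, as discussed.
```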
@vfazio sounds good, please send the patches to the linux-gpio mailing list once you have them.
@brgl I'm not sure there's much of a patch to submit. Do you want me to write and submit some bash script you can run on your machine to perform this build? I can't embed your PyPI creds in it. It may make sense for it to just be a script you keep yourself and use, similar to when you publish sdists?
If there will be a script, I think it should be committed to the repo. Bartosz had mentioned that he uses the setup.py script to create the sdist: …

Previously I came across … The PyPI credentials would be fetched from outside the script, of course, e.g. through environment variables. If the user was supposed to run the … Even if a CI won’t be used, my intention is to make the process as automated / automatable as possible.
That script creates the sdist but doesn't publish it. I can write some small script and place it in that directory that generates the artifacts for publishing. This would include generating the sdist as well, since that's an input for the wheels. I'll leave the publishing omitted, since that's not currently handled by similar scripts. There will be some additional dependencies that will need to be met on the machine building the wheels: …
I don't know what host distro you generally use, but I can do some distro-agnostic checks for these before kicking off a build. If I have time, I'll try to get this done this week.
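A distro-agnostic pre-flight check (the exact dependency list is an assumption here, e.g. Docker plus a Python 3 interpreter) could be as simple as:

```sh
# Fail early if the host is missing the tools the build needs, without
# relying on any particular distro's package manager.
for tool in docker python3 curl; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "error: required tool '$tool' not found in PATH" >&2
        exit 1
    fi
done

# cibuildwheel drives Docker, so the daemon must be reachable too.
docker info >/dev/null 2>&1 || {
    echo "error: cannot talk to the Docker daemon" >&2
    exit 1
}
```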
I work on Ubuntu.
@brgl I have something that "works" if you want to give it a spin here: vfazio@1e019bd. I'll try to do more testing and submit it to the mailing list in the next few days.
Thanks, I will give it a try.
@vfazio this went surprisingly well right from the start: …

Do these wheels support glibc too?
@vfazio Let's say we get this upstream and make a new release of the Python bindings. I would typically do …
The manylinux wheels should cover glibc. You can create a virtualenv, …
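For example (the wheel path is illustrative):

```sh
# Create a clean virtualenv and install the built wheel into it.
python3 -m venv /tmp/gpiod-test
. /tmp/gpiod-test/bin/activate
pip install wheelhouse/gpiod-*.whl

# If the import succeeds, the vendored libgpiod shared library was found.
python3 -c 'import gpiod; print("gpiod imported OK")'
```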
You would point your twine command to the …
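For example, uploading everything to the test instance first (paths assumed from the earlier steps):

```sh
# Credentials come from TWINE_USERNAME/TWINE_PASSWORD or ~/.pypirc;
# 'testpypi' is one of twine's built-in repository names.
pip install twine
twine upload --repository testpypi dist/*.tar.gz wheelhouse/*.whl
```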
While I have the script building wheels for musl libc installations as well, I do not have a great way of testing those. I don't have a board running Alpine that I can conveniently test with... though maybe I can find a bootable live CD somewhere and run it through QEMU. I plan to test the aarch64 glibc wheels on an RPi today.
And how will the user running …?
This is a function of how pip works. More information here: https://packaging.pypa.io/en/stable/tags.html

When you … One of the reasons I suggested pushing to the test PyPI server was to make sure this worked as expected: https://packaging.python.org/en/latest/guides/using-testpypi/
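pip can also report which tags a given machine will accept (the `pip debug` command is officially marked unstable, but it is handy for a sanity check):

```sh
# List the platform/ABI tags this interpreter will accept, most
# preferred first (e.g. manylinux_2_28 before manylinux2014).
pip debug --verbose | grep -m1 -A10 'Compatible tags'

# Then verify end to end against the test index.
pip install --index-url https://test.pypi.org/simple/ gpiod
```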
Thank you. I like it, it looks good and works fine. I would like to make it part of libgpiod.
Awesome! I will do a few minor tweaks to comments, then do some verification on the generated wheels to make sure they function as expected (unless you've had a chance to test those out), and then submit the patch.
@brgl I've tested these wheels a bit on CPython 3.12 on glibc and musl libc based systems. Build steps: …

I booted an RPi CM3+ on an IO board with Alpine Linux: …

scp the file over: …

Show that the package works by running a couple of copied examples from the libgpiod repo. For this test, I jumpered gpio 40 to gpio 41: …

I reused the same setup with a custom Debian based system as well to exercise the manylinux/glibc wheel: …
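For anyone reproducing this, a minimal smoke test along those lines (the chip path and the 40/41 offsets are specific to this jumpered setup) can be run straight from a shell:

```sh
python3 - <<'EOF'
import gpiod
from gpiod.line import Direction, Value

# Drive line 40 high and read it back on line 41 (the two are jumpered).
# /dev/gpiochip0 is board-specific; adjust as needed.
with gpiod.request_lines(
    "/dev/gpiochip0",
    consumer="wheel-smoke-test",
    config={
        40: gpiod.LineSettings(direction=Direction.OUTPUT, output_value=Value.ACTIVE),
        41: gpiod.LineSettings(direction=Direction.INPUT),
    },
) as request:
    print("read back:", request.get_value(41))
EOF
```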
Awesome, LGTM. I planned on doing a new Python release anyway.
I submitted this, and I see it reflected in a few mailing list mirrors (https://www.spinics.net/lists/linux-gpio/msg100124.html), but I don't see it in the Linaro patchwork instance. I apparently forgot to CC myself, so I wasn't sure if it actually went through.
v2 submitted: https://www.spinics.net/lists/linux-gpio/msg100346.html. This time I was less dumb in the submission, though I was dumb in my reply, because it was HTML-formatted and bounced off the list. I didn't necessarily want to resend it and spam you, however.
@brgl are there guidelines for contributions? I looked at CONTRIBUTING.md and I didn't see any guidelines on …
I need to add these points to the README, thanks for bringing this up.
I have no idea what license to use for this script; I assume GPLv2 is fine to stay in line with the rest of the repo? Looks like that's what …
Yes, GPL-2.0 works fine.
Thanks for the direction! I submitted: https://www.spinics.net/lists/linux-gpio/msg100479.html
Thanks, I applied it. So, next step: a new Python release, and this time I'll upload the wheels next to the source dist.
I'll try to test the wheels once I see them published.
Can we close this now?
Closing as resolved as part of the 2.2.0 release. Hopefully future releases follow the same pattern. If other platforms need support, a separate issue can be created.
Yes, definitely. Now that this has been exercised, I'll just follow the same procedure every time.
There are a number of reasons to provide precompiled wheels: …
Building wheels on the command line or via a pipeline is generally pretty straightforward. There is a utility called cibuildwheel that accomplishes most of this by using containers with very specific combinations of toolchains and utilities to build libraries that will work across distributions, generally rooted off of the major glibc version available.
From the command line, with an sdist: …
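Something along these lines (the sdist filename is illustrative):

```sh
# Point cibuildwheel at a downloaded sdist; it spins up the manylinux
# containers and builds wheels for each configured target.
pip install cibuildwheel
cibuildwheel --platform linux --output-dir wheelhouse gpiod-2.1.3.tar.gz
```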
This build used the default target images for building wheels, which is probably fine as they are not EOL (see https://github.com/pypa/manylinux). The one suggestion I might make is to also generate manylinux_2_28 wheels, as the OS used to generate manylinux2014 will technically be EOL in June, though it may still receive support from the provided Docker images beyond its EOL date.
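cibuildwheel exposes image-override knobs for exactly this, for instance:

```sh
# Tag the wheels manylinux_2_28 instead of the default manylinux2014.
export CIBW_MANYLINUX_X86_64_IMAGE=manylinux_2_28
export CIBW_MANYLINUX_AARCH64_IMAGE=manylinux_2_28
cibuildwheel --platform linux --output-dir wheelhouse gpiod-2.1.3.tar.gz
```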
Having prebuilt wheels will likely also increase package adoption. I'm currently stuck on the old pure-python implementation (and thus the v1 kernel interface) until wheels are provided, unless I package and deploy them to my own internal server.