To set up the whole thing, as root:

```sh
apt install docker.io git make m4 acl
useradd -Um -s /bin/bash buildbot
usermod -aG docker buildbot
apt install \
    python3-autobahn \
    python3-cryptography \
    python3-dateutil \
    python3-docker \
    python3-future \
    python3-jinja2 \
    python3-jwt \
    python3-migrate \
    python3-sqlalchemy \
    python3-twisted \
    python3-venv \
    python3-yaml
su - buildbot
```

Then, as the `buildbot` user:

```sh
python3 -m venv --system-site-packages buildbot-master
. buildbot-master/bin/activate
git clone repo
cd dockerized-bb
cp buildbot-config/config.py.example buildbot-config/config.py
```
Edit `config.py` to fit your needs. If you don't want to build all toolchains and all workers:

```sh
cp Makefile.user.example Makefile.user
```

Edit `Makefile.user` to fit your needs (the example configuration downloads all workers and doesn't build anything), then:

```sh
make master
make workers
```

To run buildbot as a service, copy the systemd unit, adapt the paths and user, and enable it. As root:

```sh
cp contrib/buildbot.service /etc/systemd/system/buildbot.service
systemctl daemon-reload && systemctl enable buildbot && systemctl start buildbot
```
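The repository ships `contrib/buildbot.service`; as a rough sketch of what such a unit looks like (every path, the user name and the basedir below are assumptions to adapt to your checkout and `BUILDBOT_BASEDIR`, not the shipped file):

```ini
[Unit]
Description=ScummVM buildbot master
After=network-online.target docker.service
Wants=network-online.target

[Service]
# Assumed user and paths -- adapt to your local setup
User=buildbot
WorkingDirectory=/home/buildbot/dockerized-bb
ExecStart=/home/buildbot/buildbot-master/bin/buildbot start --nodaemon /home/buildbot/dockerized-bb/basedir
Restart=on-failure

[Install]
WantedBy=multi-user.target
```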
The buildbot master configuration is located in the `buildbot-config` directory and spread across several files:

- `steps.py`: defines all the custom steps needed by ScummVM.
- `config.py`: contains the configuration which directly depends on the local setup and administrator choices.
- `builds.py`: configures all the build steps for each project. There is one class per project (ScummVM, ScummVM tools) and each variant (master, stable, whatever) is registered using that class.
- `platforms.py`: describes all the platform-specific configurations. These can be specialized depending on the build run.
- `workers.py`: defines the various worker types used by BuildBot. There are currently two of them: fetcher and builder. The fetcher is responsible for fetching sources and triggering the actual builds, while builders are instantiated based on the build platform.
- `ui.py`: contains all the user-interface-related configuration.
- `master.cfg`: the file loaded by BuildBot; it defines the `BuildmasterConfig` object based on the other files.
New platforms get defined in `platforms.py`. New projects (GSoC for example) are added to `builds.py`.
Workers get started on demand and stopped when they are not needed anymore. That avoids idle containers consuming memory and CPU cycles. They use a local network created at buildbot startup.
All the data generated by build processes is stored in `buildbot-data` at the root of the repository. It contains the sources, build objects, packages and the ccache directory. All of this is created at buildbot startup. The Android build system also stores downloaded files there to avoid fetching them again at every build.
Docker images are started read-only to avoid storing modifications and to keep the build process reproducible.
`make master` installs buildbot and generates a `buildbot.tac` file in the directory specified by `BUILDBOT_BASEDIR` in the Makefile.
Workers run in Docker. All the worker images are defined through Dockerfiles in the `workers` directory. Dockerfiles ending with the `.m4` extension are preprocessed using the GNU m4 preprocessor. While its syntax is quite... oldish, it doesn't clash with Dockerfile syntax the way the C preprocessor does, and it's widely available.
Each worker's image data is located in its own directory: the `debian-x86-64` data resides in the `workers/debian-x86-64` directory. M4 Dockerfiles include parts from the `common` directory to avoid repeating the same instructions over and over. This gives more latitude to create images than building a single base image with all the buildbot tools and deriving everything from it.
To create a worker for a new platform, one should first create a toolchain with everything ready in it. This separates the toolchain creation process from its instantiation with the buildbot tools for ScummVM use. To comply with the Makefile rules, a worker needs a toolchain with the same name, or no toolchain at all. You don't need a toolchain if the worker can easily pull all the libraries it needs directly from repositories (as for the Debian platforms).
To create a custom worker, the Dockerfile should:

- create a new image with the same base as the toolchain one (to have matching host libraries),
- install buildbot in it (if the base image is Debian, you can use `debian-builder-base.m4`),
- define the `HOST` and `PREFIX` environment variables,
- copy the `PREFIX` directory from the toolchain,
- define the same environment as in the toolchain (the `PATH` can be adjusted to make the build process easier),
- finish the buildbot configuration (using `run-buildbot.m4`).
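The steps above can be sketched as an m4-preprocessed Dockerfile. The toolchain image name, target triple and prefix path below are illustrative assumptions, not names taken from the repository:

```dockerfile
dnl Hypothetical worker Dockerfile.m4 -- `dnl' starts an m4 comment;
dnl plain Dockerfile syntax passes through the preprocessor untouched.
include(`debian-builder-base.m4')dnl

# Same HOST/PREFIX values as in the matching toolchain image (assumed values)
ENV HOST=arm-example-linux-gnueabi \
    PREFIX=/opt/toolchain

# Copy the prepared toolchain from its image (assumed image name)
COPY --from=toolchains/example ${PREFIX} ${PREFIX}

# Reproduce the toolchain environment; PATH adjusted for convenience
ENV PATH=${PREFIX}/bin:${PATH}

include(`run-buildbot.m4')dnl
```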
`make workers` just runs `docker build` on every directory in `workers`, using GNU m4 when needed. It handles file modifications and toolchain dependencies. It also takes user preferences into account and downloads prebuilt workers when building them isn't desired.
A toolchain is a collection of a compiler, binutils and the libraries needed to compile ScummVM. They are installed under a specified prefix and shouldn't pollute the image filesystem.
There is one common image, `toolchains/common`, to help generate toolchains. It just contains the scripts and no operating system. Toolchain generation images copy files from this base image when needed (as in `windows-x86_64`).
When a custom toolchain has to be built, the Dockerfile should:

- define the `HOST` and `PREFIX` environment variables,
- build or install a compiler, a libc and binutils at the prefix location,
- define the environment with all the newly installed binaries,
- install prebuilt libraries if available,
- copy missing library build rules from `common/toolchain` or the local build context,
- run the rules.
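A minimal sketch of such a toolchain Dockerfile, assuming a Debian base and a hypothetical target triple; the package names and the script paths copied from the common image are illustrative, not the repository's actual layout:

```dockerfile
# Hypothetical toolchain Dockerfile -- target triple, packages and script
# paths are assumptions for illustration
FROM debian:stable
ENV HOST=arm-example-linux-gnueabi \
    PREFIX=/opt/toolchain

# Install a prebuilt cross compiler, libc and binutils targeting $HOST
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc-arm-linux-gnueabi binutils-arm-linux-gnueabi libc6-dev-armel-cross

# Make the newly installed binaries visible to later build steps
ENV PATH=${PREFIX}/bin:${PATH}

# Copy shared library build rules from the common image, then run them
COPY --from=toolchains/common /scripts/ /scripts/
RUN /scripts/build-libraries.sh
```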
`make toolchains` just runs `docker build` on every directory in `toolchains`, using GNU m4 when needed. It also takes user preferences into account and downloads prebuilt toolchains when building them isn't desired.
Apple toolchains need SDKs which aren't publicly available. The `apple-sdks` toolchain extracts them from Xcode packages. For this you need an Apple ID account to download the Xcode packages needed by the toolchain. They must be placed in the `toolchains/apple-sdks` directory.

In some (if not all) versions of Docker, building the image will fail if the files are larger than 4 GB. You can split them into smaller parts with:

```sh
split -b2G toolchains/apple-sdks/<filename>.xip toolchains/apple-sdks/<filename>.xip.
```

The parts will be joined back during the build process.
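As a sanity check that the parts reconstruct the original, the split/join round trip can be exercised on a small dummy file (file names and sizes here are placeholders, not the real multi-gigabyte Xcode package):

```sh
set -e
# Work in a temporary directory with a dummy stand-in for the .xip
tmp=$(mktemp -d)
cd "$tmp"
head -c 1048576 /dev/urandom > Xcode-example.xip   # 1 MiB stand-in

# Split into 256 KiB parts, suffixed .xip.aa, .xip.ab, ...
split -b 262144 Xcode-example.xip Xcode-example.xip.

# Concatenating the parts in suffix order restores the file byte-for-byte,
# which is what the image build does with the real SDK archive
cat Xcode-example.xip.?? > rejoined.xip
cmp Xcode-example.xip rejoined.xip && echo "parts join back identically"
```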
Many parts of this repository come from Colin Snover's work at https://github.com/csnover/scummvm-buildbot.
Thanks to him.
There are still many things to do:

- create all the platform images and add them to the master configuration,
- other things I must have forgotten...