5.0 Building the Image
Docker images are built automatically through a GitHub Actions workflow and hosted at the GitHub Container Registry.
An incremental build process is used to avoid the need for a huge build cache. Each image in the chain below builds on the one before it:
- nvidia/cuda / ubuntu
- ai-dock/base-image
- ai-dock/python
- ai-dock/pytorch / ai-dock/jupyter-pytorch
- ai-dock/[application-name]
> **Note**
> Where an image does not follow this structure, it will be noted in that image's repository.
You can self-build from source by editing `docker-compose.yaml` or `.env` and running `docker compose build`.
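As a hedged sketch, a local rebuild might look like the following. The `IMAGE_TAG` variable name is an assumption for illustration only; check the repository's `.env` for the variables it actually reads:

```shell
# Override a build default in .env (IMAGE_TAG is a hypothetical
# variable name; substitute one your checkout actually defines).
printf 'IMAGE_TAG=local-dev\n' >> .env

# Rebuild with the override applied (skipped if docker is not installed):
if command -v docker >/dev/null 2>&1; then
    docker compose build
fi
```

Note that `docker compose build` reads `docker-compose.yaml` from the current directory, so run it from the repository root.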
It is a good idea to leave the source tree alone and place any new files you would like to add or override into `build/COPY_ROOT_EXTRA/...`. The structure within this directory is overlaid on `/` at the end of the build process. Because this overlay happens after the main build, it is easy to add extra files such as ML models and datasets to your images, and rebuilds are fast when your changes are confined to this directory.
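The overlay step can be sketched as follows; the model filename below is a placeholder:

```shell
# Place extra files under build/COPY_ROOT_EXTRA/ using the same layout
# they should have relative to / inside the image.
mkdir -p build/COPY_ROOT_EXTRA/opt/storage/models
printf 'placeholder weights\n' > build/COPY_ROOT_EXTRA/opt/storage/models/example.ckpt

# At the end of the build this tree is overlaid on /, so the file above
# becomes /opt/storage/models/example.ckpt in the finished image.
```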
Any directories and files that you add under `opt/storage` will be made available in the running container at `$WORKSPACE/storage`.
This directory is monitored by `inotifywait`. Any items appearing in it are automatically linked into the application directories defined in `/opt/ai-dock/storage_monitor/etc/mappings.sh`. This is particularly useful when several applications need to share the same stored files.
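A minimal sketch of the effect, simulated here with plain symlinks outside the container; the directory names are stand-ins, and the real source-to-destination mappings live in `mappings.sh`:

```shell
# Stand-ins for the real $WORKSPACE and an application's model directory.
WORKSPACE=./demo-workspace
APP_MODELS=./demo-app/models

mkdir -p "$WORKSPACE/storage/models" "$APP_MODELS"
touch "$WORKSPACE/storage/models/example.safetensors"

# The storage monitor reacts to the new file and links it into the
# mapped application directory, much like this:
ln -sf "$(pwd)/$WORKSPACE/storage/models/example.safetensors" \
       "$APP_MODELS/example.safetensors"
```

Because the application sees a link rather than a copy, the same stored file can be shared by every mapped application without duplicating it.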
> **Note**
> Building your own image with `FROM` notation is supported, but not documented here as it is an established standard.
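For example, a derived image might start from one of these bases. The registry path and tag below are assumptions for illustration; check the package listings on the GitHub Container Registry for the tags that actually exist:

```dockerfile
# Hypothetical base image reference - verify the tag on ghcr.io.
FROM ghcr.io/ai-dock/pytorch:latest

# Add your own application layers as usual.
COPY my-app/ /opt/my-app/
```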
> **Warning**
> Please be mindful of the software license. These images are free for personal and commercial use, but some restrictions apply.