Rob Ballantyne edited this page Aug 5, 2024 · 9 revisions

About AI-Dock

AI-Dock is an effort to containerise popular GPU-based AI and machine learning applications for use in the cloud. The goal is to ensure that users who cannot afford the hardware required to run such applications are not 'left behind' as the technology advances.

The preferred target environments are container-first cloud providers, where it may not be possible to run more than one container per GPU. These providers are commonly less expensive than traditional cloud services, but the trade-off is that several applications must be bundled inside a single container image.

While this approach may not fit well with the Docker principle of 'one process per container', it is a reasonable compromise given the nature of our target platforms.

Running Containers

AI-Dock containers are primarily built for use at container-first cloud providers such as Vast.ai, a large open marketplace for GPU-enabled hardware. A Docker Compose file is included in each repository to assist with running in a local environment.
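The compose file itself varies per repository; the hypothetical fragment below only illustrates its general shape. The image tag, port mapping, and GPU reservation are assumptions, not real defaults — use the docker-compose.yaml shipped in the repository you cloned.

```yaml
# Hypothetical fragment; the real file ships in each AI-Dock repository.
services:
  ai-dock:
    image: ghcr.io/ai-dock/base-image:latest   # illustrative image tag
    ports:
      - "1111:1111"                            # illustrative port mapping
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                   # expose the host GPU(s)
              count: all
              capabilities: [gpu]
```

With such a file in place, `docker compose up -d` starts the bundled services locally.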

Documentation

All AI-Dock containers extend this base image, so many of the configurable options are the same regardless of which bundled application you are running. Those shared options are documented in this wiki; options specific to the bundled application(s) are documented in the README.md of that particular container.

Security

These containers are interactive and will not drop root privileges. You should ensure that your docker daemon runs as an unprivileged user.
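A quick way to check how your daemon is configured is to inspect its reported security options. This is only a sketch; it assumes nothing beyond the `docker` CLI possibly being installed, and degrades gracefully when it is not:

```shell
# Sketch: report whether the Docker daemon is running in rootless mode.
check_docker_rootless() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker CLI not found"
    return 0
  fi
  # The daemon advertises "rootless" among its security options when
  # it runs as an unprivileged user.
  if docker info --format '{{.SecurityOptions}}' 2>/dev/null | grep -q rootless; then
    echo "rootless"
  else
    echo "rootful"
  fi
}

check_docker_rootless
```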

A normal system user is created at runtime. This should be considered a convenience rather than a security measure, as the user has unrestricted access to sudo.

All exposed web services are protected by a reverse proxy with authentication enabled by default, to prevent port-scanning applications from finding and abusing the services. You should use HTTPS wherever possible.
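As an illustration of what an authenticating proxy expects from a client, the sketch below builds an HTTP Basic Auth header. The username, password, host, and port shown are placeholders, not real defaults; the actual credentials are configured via the container's environment as documented elsewhere in this wiki.

```shell
# Sketch: construct the HTTP Basic Auth header a client sends to the proxy.
# "user" and "password" are placeholders for your configured credentials.
make_basic_auth_header() {
  printf 'Authorization: Basic %s' "$(printf '%s:%s' "$1" "$2" | base64)"
}

# Example usage (host and port are illustrative):
#   curl -H "$(make_basic_auth_header user password)" https://localhost:1111/
make_basic_auth_header user password
```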