---
theme: seriph
class: text-center
background: none
highlighter: shikiji
lineNumbers: false
favicon: /quackers.png
info: Hacker School 23/24 - Docker
drawings:
download: true
transition: fade
title: "Hackerschool: Docker"
mdc: true
fonts:
layout: center
---

https://nushackers.github.io/docker_slides
Hi I'm Chun Yu! My GitHub is: https://github.com/gunbux/
- I'm a Y3 CS undergraduate who enjoys hacking and building systems
- I play the violin (sometimes)
- Docker pre-installed (if you haven't installed it, sound out now!)
- Some basic knowledge of the command line
- Motivations behind docker
- Concept 1: Docker basics and terminology
- Concept 2: The Docker Hub
- Concept 3: Connecting Volumes and Networks
- Concept 4: Writing your own Dockerfile
- Concept 5: Connecting Containers with Docker Compose
::right::
- Advanced Docker Concepts
- Orchestration tools like Kubernetes or Ansible
- Docker as a secure development environment
Docker is a piece of software that allows us to run applications in isolated environments known as containers.
Consider an application that requires a specific version of Node or Python to run.
- Well, what if you want to get a team to work on that project? That would mean a large amount of setup time to get everyone ready for development.
- What if you want to deploy your application to multiple servers in production?
You can think of Docker as a box or package which encapsulates everything your application needs to run.
In other words, we want to be able to write some configuration files that define the environment/infrastructure our programs run in.
Don't virtual machines kind of solve the problem docker aims to solve as well?
Answer: We can think of docker containers as computationally cheaper virtual machines.
::right::
See more here:
https://www.cloudbees.com/blog/why-docker#docker-enables-consistent-environments
- Docker Images are read-only templates or blueprints for containers
- You can think of it like a published recipe for a cake.
- You can't modify the recipe because it has been published
- You don't actually make the cake
::right::
- Containers are runnable instances of images
- With the same example, you can think of it as making a cake with the recipe.
- You are executing the sequence of steps in the recipe to make this cake
Now, let's check that Docker is working on our machines:
docker run hello-world
Let's very quickly dissect this command:
docker run hello-world
- `docker`: the main command-line program we are running
- `run`: a subcommand that tells Docker to run a program
- `hello-world`: the name of a special image we want to run
We can use this command to "emulate" an Ubuntu environment
docker run -it ubuntu /bin/bash
- `-i`: interactive mode
- `-t`: tty (you don't need to know too much about this, just pass it every time you want to run interactively!)
- `/bin/bash`: the path to the command you want to run
We can use it to run different versions of python
docker run -it python:3.8 python
docker run -it python:2.7 python
Notice how when we first call `docker run` on an image, we get this message:

`Unable to find image <image-name> locally`

This is because `docker run` is actually a combination of actions:

- Check if we have the image locally
- If we do not have the image locally, run `docker pull` to get the image (think of it like installing an executable)
- Run the installed image (think of it like running the executable)
To view the docker containers running at any point of time:
docker ps
We can use some flags to show more; `-a` also lists stopped containers:
docker ps -a
::right::
You can find out more about the different flags here!
We've already seen different modes to run a docker container. There are a few ways you can run a docker container:
- Default without flags: this will run the docker container with your terminal attached to the container.
- Interactive: Launch a program from the docker container
- Detached: running `docker run` with a `-d` flag will detach the container from your terminal.
- How do we check the logs for detached processes?
docker run prakhar1989/static-site // Default
docker run --rm -it prakhar1989/static-site // Interactive
docker run -d -P --name static-site prakhar1989/static-site // Detached
- `--rm`: remove the container when it exits
- `--name`: give the container a name
- `-P`: publish any open ports of the container to a random port on our host machine
Now let’s check:
docker port static-site // Notice we use the NAME of the container
Then, let's head to `localhost:<port>` to see our website running in a Docker container.
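As for the earlier question about detached containers: `docker logs` prints a container's output, which is handy when nothing is attached to your terminal. A minimal sketch, using the `static-site` container from above:

```shell
# Print everything the detached container has written so far
docker logs static-site

# Follow the output live (like tail -f); Ctrl-C to stop following
docker logs -f static-site
```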
To stop a container:
docker stop <container-name>
docker rm <container-name>
Why are there two commands? Shouldn’t they do the same thing?
- `docker stop` only stops the container (think of it like pausing the application), but its state is still saved (like an exited program still held in memory). You can verify this by doing `docker ps -a`.
- `docker rm` will then remove it from 'memory'.
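To see the stop/remove distinction in action, here is a small sketch (assuming the official `nginx` image; any image with a long-running process works):

```shell
# Start a named container in the background
docker run -d --name demo nginx

# Stop it: the process ends, but the container still exists
docker stop demo
docker ps -a    # demo still shows up, with STATUS "Exited"

# Remove it: now it is gone from `docker ps -a` as well
docker rm demo
```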
To remove an image:
docker rmi <image_name>
// Test out if docker is working
docker run hello-world
// Emulating an ubuntu environment
docker run -it ubuntu /bin/bash
// Running different versions of the same application
docker run -it python:3.8 python
docker run -it python:2.7 python
// View running containers
docker ps
docker ps -a
::right::
// Different ways to run a container
docker run prakhar1989/static-site
docker run --rm -it prakhar1989/static-site
docker run -d -P --name static-site prakhar1989/static-site
// Finding published ports
docker port static-site
// Stopping and removing a container
docker stop <container-name>
docker rm <container-name>
// Removing an image
docker rmi <image-name>
The Docker Hub is a central repository of images we can use.
To view the Docker Hub, go to hub.docker.com
::right::
Whenever we pull an ubuntu or python image, we’re getting the image from the docker hub. So when I do a command like:
docker pull/docker run <image-name>
Docker will search the Docker registry for any images that match that name.
::right::
There are two main naming conventions for images:

- Official Images, e.g. `python`, `ubuntu`, `nginx`
- Community Images (not the actual name for these)
  - These are named `<username>/<image-tag>`
Let's take a look at the python image on the dockerhub:
https://hub.docker.com/_/python
::right::
What is the difference between different versions?
docker run -it python:3.8 python
docker run -it python:3.8-slim python
docker run -it python:3.8-alpine python
docker images // Use this command to check!
Look at the tags page of the Docker Hub:
https://hub.docker.com/_/python/tags
Notice how there are different architectures being built for the docker image?
::right::
By default, Docker will always try to find and use the architecture native to your machine.
This might not always be what you want, since there may be differences between your native architecture (e.g. an M1 Mac) and where you want to run your program (e.g. an x86 server).
To manually configure this, use the `--platform` flag:
docker run --platform linux/amd64 python:3.13
docker run --platform linux/arm64 python:3.13
Let's explore some fun docker images we can find on docker hub!
docker run -it xer0x/spaceinvaders // Space Invaders on your terminal!
docker run -it spkane/starwars:latest // Watch Star Wars on your terminal
// Pull a docker image
docker pull/docker run <image-name>
// Different image tags
docker run -it python:3.8 python
docker run -it python:3.8-slim python
docker run -it python:3.8-alpine python
// See installed images
docker images
// Specifying specific architecture
docker run --platform linux/amd64 python:3.13
// Fun Docker Images
docker run xer0x/spaceinvaders
docker run -it spkane/starwars:latest
What if we have more complex Docker images that download files, or need to be exposed to a network?
Volumes are basically how you share storage between the host and the container. There are two ways we can do this, either when building the image or when running the image.
We can map volumes with the following flag in our `docker run`:

-v <host directory location>:<docker directory location>
Here's a really cool docker container that allows us to download youtube videos with a UI.
Given that we know the Docker directory location is `/downloads`, map our volumes so we can also see the downloaded videos on our computer as well.
You can visit the site at `localhost:8081`.
docker run -d -p 8081:8081 -v <fill in the blanks here> ghcr.io/alexta69/metube
- `-v`: `<host location to map>:<location inside the container>`
- `-p`: we'll find out soon!
- `ghcr.io`: GitHub's version of the Docker Hub
Now you may notice that we like to pass a `-p` or a `-P` argument in a lot of the examples, and we have kind of glossed over what they do.
When you think about how your computer connects to a website, it is establishing a connection to another computer somewhere in the world. What you may not know is that it is specifically connecting to a certain port on that computer.
This concept carries over to Docker: our containers have their own ports, and Docker allows us to connect one of our host's ports to one of the container's ports.
-p <host port of our machine>:<container's port>
`-P` is a special flag where Docker automatically finds the ports used by the container and maps them to random available ports on your host machine.
If we look at our previous command:
docker run -d -p 8080:8081 -v ... ghcr.io/alexta69/metube
- `-p 8080:8081`: map the host port 8080 to our container port 8081
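A quick way to sanity-check a port mapping is to hit the host side of it. Assuming the metube container is running with `-p 8080:8081` as above:

```shell
# Requests to host port 8080 are forwarded to container port 8081
curl http://localhost:8080
```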
Here's another cool Docker image that allows us to manipulate PDFs.
Can you figure out how to map this Docker container such that we can use it?
This Docker container uses the port `8080`.
// Stirling PDF:
docker run -d \
-p <fill in the blanks here> \
-e DOCKER_ENABLE_SECURITY=false \
--name stirling-pdf \
frooodle/s-pdf:latest
- `-e`: set an environment variable
In the meantime, try out some of these cool docker containers!
themythologist/monkeytype
docker run -p 5000:5000 -d themythologist/monkeytype:frontend-latest
code-server
docker run -d \
--name=code-server \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-P \
lscr.io/linuxserver/code-server:latest
What if we want to create our own images? A Dockerfile is a simple text file that contains a list of commands that the Docker client calls while creating an image. It's a simple way to automate the image creation process.
FROM python:3
RUN pip3 install pycryptodome requests
COPY wasg-register.py /
ENTRYPOINT ["python3", "wasg-register.py"]
- `FROM`: takes a base image
- `RUN`: run a command
- `COPY`: copy a file from our host to the container
- `ENTRYPOINT`: the command to start the 'application'; this is equivalent to running `python3 wasg-register.py`
::right::
Here's a really cool Python script that automates getting credentials for Wireless@SG without downloading the app.
https://github.com/zerotypic/wasg-register/blob/master/wasg-register.py
Unfortunately, it requires us to install some dependencies to run, which makes the program difficult to share with others.
We can instead share a Dockerfile to build an image that can be used anywhere!
docker build -t <name> .
docker run <name>
FROM golang
MAINTAINER Drew Miller <drew@joyent.com>
ENV GOPATH=/go
RUN go get -u github.com/asib/spaceinvaders && \
cd $GOPATH/src/github.com/asib/spaceinvaders && \
go build
CMD /go/bin/spaceinvaders
::right::
Here's another Dockerfile; this time it's the Dockerfile of the Space Invaders image we played with!
- `ENV`: set environment variables
- `&& \`: run multiple commands and continue onto the next line
- `CMD`: a lot of people mix this up with `ENTRYPOINT`. Officially you should "start" the application with `ENTRYPOINT` and pass arguments to it with `CMD`, but a lot of people also just start the application with `CMD`.
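A tiny illustration of that split, using a hypothetical image we'll call `greeter` (built on `alpine`):

```dockerfile
FROM alpine
# ENTRYPOINT fixes the program; CMD supplies default arguments
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]
```

With this image, `docker run greeter` runs `echo Hello world`, while `docker run greeter Docker` replaces the `CMD` part and runs `echo Hello Docker`.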
Now let's try practicing writing our own Dockerfile!
We have a simple flask app ready to be deployed, get it here:
You might want to take a look at metube's Dockerfile to see how they made things work: https://github.com/alexta69/metube/blob/master/Dockerfile

Also look at the docs! https://docs.docker.com/engine/reference/builder/
::right::
Here's what we want to do:
- Require Python to run
- Install the required Python packages with `pip3 install --no-cache-dir -r requirements.txt`
- Expose the port 5000 so that we can use it (Hint: use `EXPOSE <port>` to expose a port)
- Run the command `python3 app.py` to start the application
- Require Python to run
- Install the required Python packages with `pip3 install --no-cache-dir -r requirements.txt`
- Expose the port 5000 so that we can use it (Hint: use `EXPOSE <port>` to expose a port)
- Run the command `python3 app.py` to start the application
::right::
FROM python:3.8
# copy all the files to the container
COPY . /app
# Set a working directory to /app
WORKDIR /app
# install dependencies
RUN pip install --no-cache-dir \
-r requirements.txt
# define the port number
# the container should expose
EXPOSE 5000
# run the command
CMD ["python", "./app.py"]
We saw a really cool docker image just now:
https://github.com/linuxserver/docker-code-server
The problem? I don't want to type out or remember this command every time I want to start this container!
::right::
docker run -d \
--name=code-server \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-p 8443:8443 \
lscr.io/linuxserver/code-server:latest
This is what Docker Compose aims to address, as well as a bunch of other different issues we have with plain docker.
- What if we want to run multiple containers together? I don’t want to copy and paste 10 different commands!
- What if I want the docker containers to communicate with each other? (We won’t cover this in too much detail, but it’s possible for this to work!)
On top of our Dockerfiles/images, Docker Compose lets us write, in a `docker-compose.yml`, how we want to run our containers, and also allows us to spin up multiple containers at once!
There are three main sections to a `docker-compose.yml`:

- Version: which version of the Compose file format we are using (we use "3", the latest major version)
- Services: define your Docker containers
- Volumes: define your Docker volumes
::right::
version: "3"
services:
es:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
container_name: es
environment:
- discovery.type=single-node
ports:
- 9200:9200
volumes:
- esdata1:/usr/share/elasticsearch/data
web:
image: yourusername/foodtrucks-web
command: python3 app.py
depends_on:
- es
ports:
- 5000:5000
volumes:
- ./flask-app:/opt/flask-app
volumes:
esdata1:
driver: local
The services section contains:

- `image`: indicates the image to start the service with
- `command`: overrides the `CMD` in the Dockerfile
- `environment`: sets environment variables
- `ports`: carries out port forwarding
- `volumes`: can be a bind mount or a dedicated volume
- `depends_on`: tells Docker the sequence in which to start the services
::right::
version: "3"
services:
es:
image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
container_name: es
environment:
- discovery.type=single-node
ports:
- 9200:9200
volumes:
- esdata1:/usr/share/elasticsearch/data
web:
image: yourusername/foodtrucks-web
command: python3 app.py
depends_on:
- es
ports:
- 5000:5000
volumes:
- ./flask-app:/opt/flask-app
volumes:
esdata1:
driver: local
Let's try rewriting this code-server docker command into a docker-compose.yml!
::right::
docker run -d \
--name=code-server \
-e PUID=1000 \
-e PGID=1000 \
-e TZ=Etc/UTC \
-P \
lscr.io/linuxserver/code-server:latest
version: "3"
services:
code-server:
image: lscr.io/linuxserver/code-server:latest
container_name: code-server
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
ports:
- 8443:8443
restart: unless-stopped
docker compose up // Start all the services defined in docker-compose.yml
docker compose down // Stop and remove the containers
docker compose up -d --force-recreate // Recreate the containers in detached mode
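A few other everyday Compose commands worth knowing (the service name `code-server` is just the one from our example):

```shell
# List the containers managed by this compose file
docker compose ps

# Tail the combined logs of all services
docker compose logs -f

# Restart a single service by name
docker compose restart code-server
```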
Here's a partial docker-compose.yml:
services:
website:
build:
context: .
dockerfile: website/Dockerfile.dev
args:
- GIT_COMMIT_HASH
- DATA_API_BASE_URL
volumes:
- ~/.cache/yarn:/home/node/.cache/yarn
- ./website:/home/node/app/website
- ./packages:/home/node/app/packages
- ./.prettierrc.js:/home/node/app/.prettierrc.js
expose:
- 8080
- 8081
environment:
- NODE_ENV=development
user: ${CURRENT_UID:-1000:1000}
restart: on-failure
::right::
- `build`: instead of pulling an image from Docker Hub, build using a directory with a Dockerfile
This is actually a portion of the `docker-compose.yml` used by NUSMods:
https://github.com/nusmodifications/nusmods/blob/master/docker-compose.yml
With this knowledge, you can probably deploy the entire NUSMods website locally using Docker Compose!
- Notice how we have three different services. This means three Docker containers are required to run NUSMods locally.
- `restart: on-failure` is another special feature of Docker Compose that automatically restarts the container if it fails abruptly.
Let's try running it!
docker compose -f docker-compose.yml build
docker compose -f docker-compose.yml up
Remember that Docker images are similar to applications. In other words, these images take up space on your computer if you don't clean them up. If you no longer need your Docker images, besides doing `docker rmi`, you can clean your entire system of images by doing:

docker system prune -a
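If pruning everything feels too drastic, Docker also ships narrower prune commands:

```shell
# Remove only stopped containers
docker container prune

# Remove only dangling (untagged) images
docker image prune

# Remove unused volumes (careful: this deletes data!)
docker volume prune
```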
- Docker is a containerization platform for developing, shipping, and running applications.
- Dockerfiles are scripts containing commands to build Docker images.
- Docker Compose allows for defining and managing multi-container Docker applications via a simple YAML file.
- Docker Curriculum: https://docker-curriculum.com/
- Really cool docker compose files: https://github.com/Haxxnet/Compose-Examples
- Official Docker Cheatsheet: https://docs.docker.com/get-started/docker_cheatsheet.pdf
Let us know your thoughts on this workshop!