Note
We move from one way of doing things to another because of inconvenience. This is evolution.
A computer is essentially 3 components: CPU, RAM & disk. We cannot use this bare metal directly, so we need an abstraction layer on top of it.
The Operating System provides a 'Kernel' that acts as the abstraction layer over CPU, RAM & disk, and the Operating System manages resource allocation for them.
Hence, an application runs inside an Operating System: through the kernel it can access compute & storage resources, and the Operating System can ensure an appropriate amount of resources is allocated to the application.
To run an application isolated from the local environment, it's run on a dedicated server.
Traditionally, 1 server could only run 1 Operating System, and therefore deploy only 1 application.
VMware came out with the Virtual Machine, with which 1 server could now run multiple Operating Systems, and therefore deploy multiple applications.
Q. The Operating System manages resource allocation and there is only 1 server, so do these multiple Operating Systems communicate with each other to allocate resources properly?
No.
- The physical resources of the 1 server are divided virtually among the Operating Systems.
- This is managed by the Hypervisor.
- An Operating System can only give an application the resources that were allocated to it via the hypervisor.
However, the limited physical resources of a server now had to be divided amongst multiple Operating Systems, which was still very problematic for applications running at scale.
Containers came into the picture, enabling multiple applications to run in isolation on the same Operating System.
Docker: a company that made Linux containers mainstream.
- Dockerfile is a template for building a Docker image, like a recipe for a dish.
- A Docker image is like a pre-built binary. When you run it, it runs the application. Therefore, you can say that a Docker application is a running instance of a Docker image.
- A Docker application runs inside a container, so whenever a container is running, it means some application is running via Docker (not literally, but loosely speaking).
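To make the recipe analogy concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python app (`app.py` and the image name are assumptions, not from any real project):

```dockerfile
# Base image: the starting "ingredients"
FROM python:3.12-slim

# Copy the application code into the image
WORKDIR /app
COPY app.py .

# Default command the container runs on start
CMD ["python", "app.py"]
```

Running `docker build -t my-app .` in the directory containing this Dockerfile bakes the recipe into an image, and `docker run my-app` starts a container (a running application) from that image.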
docker pull <image_name>
downloads the image from Docker Hub.
Additionally, you can specify a version for your image via a tag:
docker run <image_name>:<tag>
docker run <image_name>
Note
If the image is not present locally, it will be pulled from Docker Hub.
docker ps
[!IMPORTANT] We get to see the container id of every running container from this command, which comes in handy for running container-specific commands.
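For reference, `docker ps` prints a table roughly like the following (the values here are made up; the column layout is what Docker prints):

```
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS         PORTS     NAMES
3f2a1b4c5d6e   ubuntu    "sleep 300"   2 minutes ago   Up 2 minutes             sleepy_pike
```

The first column is the container id used by stop, rm and other container-specific commands.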
docker ps -a
lists all containers, including stopped ones.
docker stop <container_id>
stops a running container.
docker rm <container_id>
removes a stopped container.
docker container prune -f
removes all stopped containers without asking for confirmation.
docker rmi <image_name> -f
force-removes an image.
docker run <image_name> <application_command>
runs the given command inside the container instead of the image's default command.
Example:
$ docker run ubuntu echo heyy heyy
docker run -it <image_name>
runs the container interactively: -i keeps stdin open and -t allocates a terminal.
docker run -d <image_name>
runs the container in detached mode, i.e. in the background.
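Putting these together, a typical lifecycle for a background container might look like this (the image name and container id are placeholders):

```shell
# start a container in the background
docker run -d nginx

# list running containers to find its container id
docker ps

# stop the container, then remove it
docker stop <container_id>
docker rm <container_id>
```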
docker system prune
removes:
- stopped containers
- dangling images
- unused networks
docker system prune -a
also removes all unused images, not just dangling ones. Unused volumes are only removed if you additionally pass the --volumes flag.
docker logs <container_id>
shows the output (stdout/stderr) that the application has written inside the container.
docker logs --since <timestamp> <container_id>
shows only the log output produced after the given timestamp.
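For example, --since accepts both absolute timestamps and relative durations (the container id is a placeholder):

```shell
docker logs --since 2024-01-01T00:00:00 <container_id>
docker logs --since 10m <container_id>   # logs from the last 10 minutes
```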
Important
Run this before starting Docker (on Ubuntu 24.04 LTS); it disables AppArmor's restriction on unprivileged user namespaces, which can otherwise prevent containers from starting:
sudo sysctl -w kernel.apparmor_restrict_unprivileged_userns=0