This scenario shows how to create a K8s cluster on a virtual PC (multipass, kubeadm, containerd).
When creating a K8s cluster with Ubuntu and Windows nodes, please use Ubuntu 20.04 and Windows Server 2019, because this combination was tested and runs stably.
Easy way to create a K8s cluster with Ubuntu 20.04 (Control-Plane, Workers) and Windows Server 2019:
- Ubuntu 20.04 installation files (updated: K8s 1.26.2, Calico 3.25.0, containerd 1.6.10), without a corporate proxy:
- Windows Server 2019 installation files (K8s 1.23.5, Calico 3.25.0, Docker as container runtime), without a corporate proxy:
IMPORTANT:
- If your cluster is behind a corporate proxy, you should add proxy settings to the environment variables, the Docker config, and the containerd config.
- Links in the script files might change over time (e.g. Calico has updated its links before).
- Important Notes from K8s:
- K8s on Windows: https://kubernetes.io/docs/concepts/windows/intro/
- Supported Versions: https://kubernetes.io/docs/concepts/windows/intro/#windows-os-version-support
- Creating Cluster With Kubeadm, Containerd
- Joining New K8s Worker Node to Existing Cluster
- IP address changes in Kubernetes Master Node
- Removing the Worker Node from Cluster
- Installing Docker on an Existing Cluster & Running a Local Registry to Store Local Images
- Pulling an Image from the Docker Local Registry and Configuring Containerd
- NFS Server Connection for Persistent Volume
- "Multipass is a mini-cloud on your workstation using native hypervisors of all the supported plaforms (Windows, macOS and Linux)"
- Fast to install and to use.
- Link: https://multipass.run/
# creating master, worker1
# -c => cpu, -m => memory, -d => disk space
multipass launch --name master -c 2 -m 2G -d 10G
multipass launch --name worker1 -c 2 -m 2G -d 10G
# get shell on master
multipass shell master
# get shell on worker1
multipass shell worker1
- Run on ALL nodes:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
- Run on ALL nodes:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
- Run on ALL nodes:
sudo sysctl --system
- Disable swap on the OS. The kubelet requires swap to be off, especially when running directly on the OS (on-premise) instead of in a VM:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
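- (Optional) Verify that swap is now disabled:
swapon --show # should print nothing when swap is off
free -h # the Swap line should show 0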
- If you install your cluster behind a proxy, you should define the http_proxy, https_proxy, ftp_proxy and no_proxy environment variables in /etc/environment.
- Add ::6443 and the master node IP to no_proxy.
export no_proxy="192.168.*.*, ::6443, <yourMasterIP>:6443, 172.24.*.*, 172.25.*.*, 10.*.*.*, localhost, 127.0.0.1"
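- For example, the /etc/environment entries could look like the sketch below (an illustration; <ProxyIP> and <yourMasterIP> are placeholders for your own values):
# example proxy entries for /etc/environment (adjust the proxy address and master IP to your setup)
http_proxy="http://<ProxyIP>:3128"
https_proxy="http://<ProxyIP>:3128"
ftp_proxy="http://<ProxyIP>:3128"
no_proxy="192.168.*.*, ::6443, <yourMasterIP>:6443, 172.24.*.*, 172.25.*.*, 10.*.*.*, localhost, 127.0.0.1"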
- Run on ALL nodes:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
- Run on ALL nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
- Run on ALL nodes:
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
- Run on ALL nodes:
sudo sysctl --system
- Run on ALL nodes:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install containerd -y
sudo mkdir -p /etc/containerd
sudo su -
containerd config default | tee /etc/containerd/config.toml
exit
sudo systemctl restart containerd
- Run on ALL nodes:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
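- (Optional) Confirm the installed versions and that the packages are held:
kubeadm version
kubectl version --client
kubelet --version
apt-mark showhold # should list kubelet, kubeadm, kubectl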
- Run on ALL nodes:
sudo kubeadm config images pull
- From worker1, ping the master to learn the master's IP:
ping master
- Run on Master:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<ip> --control-plane-endpoint=<ip>
# sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=172.31.45.74 --control-plane-endpoint=172.31.45.74
- After the kubeadm init command finishes, the master node responds with the following:
- On the Master node run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
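- (Optional) At this point kubectl should answer from the master; the node will report NotReady until the network plugin is installed:
kubectl get nodes # the master shows NotReady before the CNI plugin is applied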
- On the worker node, run the join command (your tokens will differ; see the kubeadm init output):
sudo kubeadm join 172.31.45.74:6443 --token w7nntd.7t6qg4cd418wzkup \
--discovery-token-ca-cert-hash sha256:1f03886e5a28fb9716e01794b4a01144f362bf431220f15ca98bed2f5a44e91b
- If it is required to create another master node, copy the control-plane join command (your tokens will differ; see the kubeadm init output):
sudo kubeadm join 172.31.45.74:6443 --token w7nntd.7t6qg4cd418wzkup \
--discovery-token-ca-cert-hash sha256:1f03886e5a28fb9716e01794b4a01144f362bf431220f15ca98bed2f5a44e91b \
--control-plane
- On Master node:
- Calico is used as the network plugin on K8s. Others (Flannel, Weave) could also be used.
- Run only on the master; in our examples, we are using Calico instead of Flannel:
- Calico:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
- Flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- After installing the network plugin, the nodes become ready. Only the master node is used to get information about the cluster.
- Instead of installing it as above, you should install Calico in the following way; run on the master node:
# Download Calico CNI
curl https://docs.projectcalico.org/manifests/calico.yaml > calico.yaml
# Apply Calico CNI
kubectl apply -f ./calico.yaml
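- (Optional) Watch the Calico pods come up before checking node status (with this manifest the Calico pods run in the kube-system namespace):
kubectl get pods -n kube-system -w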
Run on the Master Node:
# required to add windows node
sudo -i
cd /usr/local/bin/
curl -o calicoctl -O -L "https://github.com/projectcalico/calicoctl/releases/download/v3.19.1/calicoctl"
chmod +x calicoctl
exit
# Disable "IPinIP":
calicoctl get ipPool default-ipv4-ippool -o yaml > ippool.yaml
nano ippool.yaml # set ipipMode: Never
calicoctl apply -f ippool.yaml
kubectl get felixconfigurations.crd.projectcalico.org default -o yaml -n kube-system > felixconfig.yaml
nano felixconfig.yaml #Set: "ipipEnabled: false"
kubectl apply -f felixconfig.yaml
# This is required to prevent Linux nodes from borrowing IP addresses from Windows nodes:
calicoctl ipam configure --strictaffinity=true
sudo reboot
kubectl cluster-info
kubectl get nodes -o wide
ssh <username>@<WindowsIP> 'mkdir c:\k'
scp -r $HOME/.kube/config <username>@<WindowsIP>:/k/ # copy from the master node to the Windows PC; it is required while installing Calico
- Kubernetes requires at least Windows Server 2019 (https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/)
- Run the following in PowerShell with administrator privileges on the Windows nodes:
New-NetFireWallRule -DisplayName "Allow All Traffic" -Direction OutBound -Action Allow
New-NetFireWallRule -DisplayName "Allow All Traffic" -Direction InBound -Action Allow
Install-WindowsFeature -Name containers # the containers feature is a prerequisite for Docker
Restart-Computer -Force
.\install-docker-ce.ps1
Set-Service -Name docker -StartupType 'Automatic'
#Install additional Windows networking components
Install-WindowsFeature RemoteAccess
Install-WindowsFeature RSAT-RemoteAccess-PowerShell
Install-WindowsFeature Routing
Restart-Computer -Force
Install-RemoteAccess -VpnType RoutingOnly
Set-Service -Name RemoteAccess -StartupType 'Automatic'
start-service RemoteAccess
# Install Calico
mkdir c:\k
#Copy the Kubernetes kubeconfig file from the master node (default, Location $HOME/.kube/config), to c:\k\config.
Invoke-WebRequest https://docs.projectcalico.org/scripts/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1
c:\install-calico-windows.ps1 -KubeVersion 1.23.5
#Verify that the Calico services are running.
Get-Service -Name CalicoNode
Get-Service -Name CalicoFelix
#Install and start kubelet/kube-proxy service. Execute following PowerShell script/commands.
C:\CalicoWindows\kubernetes\install-kube-services.ps1
Start-Service -Name kubelet
Start-Service -Name kube-proxy
#Copy kubectl.exe, kubeadm.exe, etc. to the folder below, which is on the PATH:
cp C:\k\*.exe C:\Users\<username>\AppData\Local\Microsoft\WindowsApps
#Test Win node#####################################
#List all cluster nodes
kubectl get nodes -o wide
- If the Windows node is behind a corporate proxy, set the proxy environment variables and restart Docker:
[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://<ProxyIP>:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("HTTPS_PROXY", "http://<ProxyIP>:3128", [EnvironmentVariableTarget]::Machine)
[Environment]::SetEnvironmentVariable("NO_PROXY", "192.168.*.*, ::6443, <MasterNodeIP>:6443, 172.24.*.*, 172.25.*.*, 10.*.*.*, localhost, 127.0.0.1, 0.0.0.0/8", [EnvironmentVariableTarget]::Machine)
Restart-Service docker
- Create a win-webserver.yaml file to test the Windows node; run on Windows Server 2019. Details: https://kubernetes.io/docs/setup/production-environment/windows/user-guide-windows-containers/
- Ref: https://github.com/gary-RR/my_YouTube_Kuberenetes_Hybird/blob/main/Setting-ThingsUp-On-Windows-Server.sh
- If we lose the token, the token CA cert hash, and the API server address, we need to retrieve them before joining a new node into the cluster.
- We are adding a new node to the existing cluster above. We need to get the join token, the discovery token CA cert hash, and the API server advertise address. After getting this info, we'll create the join command for each node.
- Run on Master to get certificate and token information:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
kubeadm token list
kubectl cluster-info
- In this example, the token TTL has 3 hours left (normally a token expires in 24 hours), so we don't need to create a new token.
- If the token is expired, generate a new one with the command:
sudo kubeadm token create
kubeadm token list
- Create join command for worker nodes:
kubeadm join \
<control-plane-host>:<control-plane-port> \
--token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
- In our case, we run the following command on both workers (worker2, worker3):
sudo kubeadm join 172.31.32.27:6443 --token 39g7sx.v589tv38nxhus74k --discovery-token-ca-cert-hash sha256:1db5d45337803e35e438cdcdd9ff77449fef3272381ee43784626f19c873d356
- Run on the master node:
kubeadm token create --print-join-command
- Copy the join command above and paste it on ALL worker nodes.
- Then, we get nodes ready, run on master:
kubectl get nodes
- After restarting the master node, its IP address may have changed while the K8s cluster API is still configured with the node's old IP. In that case you should reconfigure the cluster with the new IP.
- Symptom: you cannot reach the API when using kubectl commands.
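- A quick check is to compare the API server address in the kubeconfig with the node's current IP:
grep server: $HOME/.kube/config # API server address kubectl is configured to use
hostname -I # current IP address(es) of the master node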
- If you installed Docker for the Docker registry, you can remove the exited containers first:
sudo docker rm $(sudo docker ps -a -f status=exited -q)
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- After kubeadm reset, if there is an error showing that some of the ports are still in use, use the following commands to kill the process, then run kubeadm init again:
sudo netstat -lnp | grep <PortNumber>
sudo kill <PID>
- The kubeadm init output shows which command should be used to join the cluster:
sudo kubeadm join 172.31.40.125:6443 --token 07vo3z.q2n2qz6bd07ipdnf \
--discovery-token-ca-cert-hash sha256:46c7dcb092ca091e71ab39bd542e73b90b3f7bdf0c486202b857a678cd9879ba
- Network Configuration with new IP:
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
sudo kubeadm reset
sudo kubeadm join 172.31.40.125:6443 --token 07vo3z.q2n2qz6bd07ipdnf \
--discovery-token-ca-cert-hash sha256:46c7dcb092ca091e71ab39bd542e73b90b3f7bdf0c486202b857a678cd9879ba
- On the master node: worker1 has now rejoined the cluster. Verify with:
kubectl get nodes
- Run the following commands on the master node to remove a specific worker node:
kubectl get nodes
kubectl drain worker2
kubectl delete node worker2
- Run on the specific deleted node (worker2)
sudo kubeadm reset
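- If the drain command is blocked by DaemonSet-managed pods or pods using emptyDir volumes, the following flags are commonly needed (an example; adjust to your workloads):
kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data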
5. Installing Docker on an Existing Cluster & Running a Local Registry to Store Local Images
- Run the following commands on the master node to install Docker:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo docker run hello-world
Go to the following for more information: https://docs.docker.com/engine/install/ubuntu/
- Copy and run on all nodes to change Docker's cgroup driver:
cd /etc/docker
sudo touch daemon.json
sudo nano daemon.json
# in the file, paste:
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
sudo systemctl restart docker
sudo docker image ls
kubectl get nodes
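- (Optional) Verify that Docker now uses the systemd cgroup driver:
sudo docker info --format '{{.CgroupDriver}}' # should print: systemd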
- If your cluster is behind a proxy, configure Docker's proxy settings (ref: https://docs.docker.com/config/daemon/systemd/). Copy and run on all nodes:
sudo mkdir -p /etc/systemd/system/docker.service.d
cd /etc/systemd/system/docker.service.d/
sudo touch http-proxy.conf
sudo nano http-proxy.conf
# copy and paste in the file:
[Service]
Environment="HTTP_PROXY=http://<ProxyIP>:3128"
Environment="HTTPS_PROXY=http://<ProxyIP>:3128"
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl show --property=Environment docker
sudo docker run hello-world
- Use docker command without sudo:
sudo groupadd docker
sudo usermod -aG docker [non-root user]
# logout and login to enable it
- Run on the master node to pull the registry image:
sudo docker image pull registry
- Run a container using the 'registry' image (-p: port binding [hostPort]:[containerPort], -d: detached mode (run in background), -e: set environment variables):
sudo docker container run -d -p 5000:5000 --restart always --name localregistry -e REGISTRY_STORAGE_DELETE_ENABLED=true registry
- Run the registry container with a bind mount (-v) and without getting error 500 (REGISTRY_VALIDATION_DISABLED=true):
sudo docker run -d -p 5000:5000 --restart=always --name registry -v /home/docker_registry:/var/lib/registry -e REGISTRY_STORAGE_DELETE_ENABLED=true -e REGISTRY_VALIDATION_DISABLED=true -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 registry
- Open with browser or run curl command:
curl http://127.0.0.1:5000/v2/_catalog
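- For a freshly started registry the catalog is empty, so the response should look like:
{"repositories":[]}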
- In this scenario, the Docker local registry already runs on the master node (see Section 5).
- First, add insecure-registries into /etc/docker/daemon.json on ALL nodes:
sudo nano /etc/docker/daemon.json
# copy insecure-registries and paste it
{
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries":["192.168.219.64:5000"]
}
sudo systemctl restart docker.service
- Pull an image from Docker Hub, tag it with a new tag, and push it to the local registry on the master node:
sudo docker image pull nginx:latest
ifconfig # to get master IP
sudo docker image tag nginx:latest 192.168.219.64:5000/nginx:latest
sudo docker image push 192.168.219.64:5000/nginx:latest
curl http://192.168.219.64:5000/v2/_catalog
sudo docker image pull 192.168.219.64:5000/nginx:latest
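- (Optional) You can also list the tags stored for a repository through the registry API (our registry IP is used here; change it to yours):
curl http://192.168.219.64:5000/v2/nginx/tags/list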
- Create the Docker config and get the authentication username and password base64-encoded:
sudo docker login # this creates /root/.docker/config
sudo cat /root/.docker/config.json | base64 -w0 # copy the base64 encoded key
- Create my-secret.yaml and paste the base64 encoded key:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
- Create secret. Kubelet uses this secret to pull image:
kubectl create -f my-secret.yaml && kubectl get secrets
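- Alternatively, the same type of secret can be created directly from the command line (a sketch assuming the same secret name; <username> and <password> are placeholders for your registry credentials):
kubectl create secret docker-registry registrypullsecret --docker-server=192.168.219.64:5000 --docker-username=<username> --docker-password=<password>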
- Create nginx_pod.yaml. The image name shows where the image is pulled from. In addition, "imagePullSecrets" should be defined to specify which secret is used to pull the image from the local Docker registry.
apiVersion: v1
kind: Pod
metadata:
  name: my-private-pod
spec:
  containers:
  - name: private
    image: 192.168.219.64:5000/nginx:latest
  imagePullSecrets:
  - name: registrypullsecret
- On ALL nodes, the registry IP and port should be defined:
sudo nano /etc/containerd/config.toml # if containerd is used as the runtime. If the runtime were Docker, add insecure-registries to /etc/docker/daemon.json as on the master
# copy and paste (our IP: 192.168.219.64, change it with your IP):
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.219.64:5000"]
endpoint = ["http://192.168.219.64:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."192.168.219.64:5000".tls]
insecure_skip_verify = true
# restart containerd.service
sudo systemctl restart containerd.service
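- (Optional) To check that containerd can reach the registry over plain HTTP, you can test-pull the image with the ctr CLI that ships with containerd (a sketch; change the IP to yours):
sudo ctr --namespace k8s.io images pull --plain-http 192.168.219.64:5000/nginx:latest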
- If the registry IP and port are not defined, you will get this error: "http: server gave HTTP response to HTTPS client".
- If the pod's status is ImagePullBackOff (Error), it can be inspected with the describe command:
kubectl describe pods my-private-pod
- On Master:
kubectl apply -f nginx_pod.yaml
kubectl get pods -o wide
- If an NFS server is required, you can create one:
- if you have Windows 2019 Server: https://youtu.be/_x3vg25i7GQ
- if you have Ubuntu: https://rudimartinsen.com/2022/01/05/nginx-nfs-kubernetes/
- Run on ALL nodes to reach the NFS server:
sudo apt install nfs-common
sudo apt install cifs-utils
sudo mkdir /data # create /data directory under root and mount it to NFS
sudo mount -t nfs <NFSServerIP>:/share /data/ # /share directory is created while creating NFS server
sudo chmod 777 /data # give permissions to reach mounted shared area
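- (Optional) Verify that the NFS share is mounted and writable:
df -h /data # the NFS export should appear as the mounted filesystem
touch /data/nfs-test && ls -l /data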
- https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
- https://github.com/aytitech/k8sfundamentals/tree/main/setup
- https://multipass.run/
- https://computingforgeeks.com/join-new-kubernetes-worker-node-to-existing-cluster/
- https://docs.docker.com/engine/install/ubuntu/
- https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
- https://stackoverflow.com/questions/32726923/pulling-images-from-private-registry-in-kubernetes
- https://stackoverflow.com/questions/65681045/adding-insecure-registry-in-containerd