This example illustrates how the placement of Pods by the scheduler can be influenced. For these examples, we assume that you have Minikube installed and running as described here.
The simplest way to influence the scheduling process is to use a nodeSelector.
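The manifest applied below is not reproduced here, but based on the label it requires, node-selector.yml presumably looks similar to this sketch (the Pod name and container image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: node-selector
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0
  nodeSelector:
    disktype: ssd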
Apply our simple example random-generator application with a node selector:
kubectl create -f https://k8spatterns.io/AutomatedPlacement/node-selector.yml
You will notice that this Pod does not get scheduled because the nodeSelector can’t find any node with the label disktype=ssd:
kubectl describe pod node-selector
....
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  8s (x2 over 8s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.
Let’s change this by adding the required label to our Minikube node:
kubectl label node minikube disktype=ssd
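You can verify that the label has been applied with:
kubectl get node minikube --show-labels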
After a short while, the Pod gets scheduled:
kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
random-generator   1/1     Running   0          65s
Let’s now use node affinity rules for scheduling our Pod:
kubectl create -f https://k8spatterns.io/AutomatedPlacement/node-affinity.yml
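The exact rule inside node-affinity.yml is not shown here; a plausible sketch, assuming the affinity requires a node labeled with more than three cores via the Gt operator (which would also explain the question about 2 versus 4 cores below):

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: numberCores
            operator: Gt
            values: [ "3" ]
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0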
Again, our Pod will not be scheduled, as no node fulfills the affinity rules. We can change this with:
kubectl label node minikube numberCores=4
Does the Pod start up now? What if you choose 2 instead of 4 for the number of cores?
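To try this out yourself, overwrite the label and recreate the Pod so that the scheduler evaluates the affinity rule again:

kubectl label --overwrite node minikube numberCores=2
kubectl delete -f https://k8spatterns.io/AutomatedPlacement/node-affinity.yml
kubectl create -f https://k8spatterns.io/AutomatedPlacement/node-affinity.yml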
To test Pod affinity, we need another Pod to which our Pod can be attracted. Let’s create both Pods with:
kubectl create -f https://k8spatterns.io/AutomatedPlacement/pod-affinity.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          22s
pod-affinity        0/1     Pending   0          22s
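For reference, pod-affinity.yml presumably defines both Pods roughly as in this sketch; the label confidential: high and the container image are assumptions, while the security-zone topology key corresponds to the node label used below:

apiVersion: v1
kind: Pod
metadata:
  name: confidential-high
  labels:
    confidential: high
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            confidential: high
        topologyKey: security-zone
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0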
"confidential-high" is a placeholder pod with a label matched by our "pod-affinity" Pod. However, our node still needs to get the proper topology key. That can be changed with
kubectl label --overwrite node minikube security-zone=high
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          9m39s
pod-affinity        1/1     Running   0          9m39s
For testing taints and tolerations, we first have to taint our Minikube node so that, by default, no Pods are scheduled on it:
kubectl taint nodes minikube node-role.kubernetes.io/master="":NoSchedule
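You can verify that the taint has been set by inspecting the node description:
kubectl describe node minikube | grep Taints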
You can check that this taint works by reapplying the previous pod-affinity.yml example and seeing that the confidential-high Pod is not scheduled:
kubectl delete -f https://k8spatterns.io/AutomatedPlacement/pod-affinity.yml
kubectl create -f https://k8spatterns.io/AutomatedPlacement/pod-affinity.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2s
pod-affinity        0/1     Pending   0          2s
But our Pod defined in tolerations.yml can be scheduled, as it tolerates this new taint on Minikube:
kubectl create -f https://k8spatterns.io/AutomatedPlacement/tolerations.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2m51s
pod-affinity        0/1     Pending   0          2m51s
tolerations         1/1     Running   0          4s
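For reference, the tolerations section in tolerations.yml presumably looks similar to this sketch (the Exists operator is an assumption; it matches the empty value used in the taint above):

apiVersion: v1
kind: Pod
metadata:
  name: tolerations
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule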