This scenario shows how Kubernetes PersistentVolumes (PV) and PersistentVolumeClaims (PVC) work on Minikube.
- On Minikube, we do not have a real NFS server to reach, so we simulate one with a Docker container:
```bash
docker volume create nfsvol
docker network create --driver=bridge --subnet=10.255.255.0/24 --ip-range=10.255.255.0/24 --gateway=10.255.255.10 nfsnet
docker run -dit --privileged --restart unless-stopped -e SHARED_DIRECTORY=/data -v nfsvol:/data --network nfsnet -p 2049:2049 --name nfssrv ozgurozturknet/nfs:latest
```
- Now our simulated NFS server is running.
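To verify that the simulated server is up, check the container status (the container name nfssrv comes from the docker run command above):

```bash
docker ps --filter name=nfssrv    # the nfssrv container should show as "Up"
```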
- Copy the manifest below and save it on your PC as pv.yaml.
- File: https://github.com/omerbsezer/Fast-Kubernetes/blob/main/labs/persistentvolume/pv.yaml
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv
  labels:
    app: mysql                           # label the PV with "mysql"
spec:
  capacity:
    storage: 5Gi                         # 5Gi = 5 gibibytes (power of 2); 5 GB would be power of 10
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /                              # the exported path on the NFS server
    server: 10.255.255.10                # IP of the NFS server
```
- Create the PV object on the cluster:
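For example, using kubectl:

```bash
kubectl apply -f pv.yaml
kubectl get pv                    # the PV shows "Available" until a claim binds it
```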
- Copy the manifest below and save it on your PC as pvc.yaml.
- File: https://github.com/omerbsezer/Fast-Kubernetes/blob/main/labs/persistentvolume/pvc.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysqlclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""
  selector:
    matchLabels:
      app: mysql                         # select the "mysql" PV defined above
```
- Create the PVC object on the cluster. After creation, the PVC's status shows that it is bound to the PV ("Bound"):
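For example, using kubectl:

```bash
kubectl apply -f pvc.yaml
kubectl get pvc                   # STATUS should be "Bound", with VOLUME = mysqlpv
```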
- Copy the manifest below and save it on your PC as deploy.yaml.
- File: https://github.com/omerbsezer/Fast-Kubernetes/blob/main/labs/persistentvolume/deploy.yaml
```yaml
apiVersion: v1                           # Secret object for the MySQL root password
kind: Secret
metadata:
  name: mysqlsecret
type: Opaque
stringData:
  password: P@ssw0rd!
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysqldeployment
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql                         # select pods by label (template > metadata > labels)
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
          volumeMounts:                  # mount the volume at this path
            - mountPath: "/var/lib/mysql"
              name: mysqlvolume          # which volume to mount (volumes > name)
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:                 # get the MySQL password from the Secret
                secretKeyRef:
                  name: mysqlsecret
                  key: password
      volumes:
        - name: mysqlvolume              # name of the volume
          persistentVolumeClaim:
            claimName: mysqlclaim        # select the "mysqlclaim" PVC defined above
```
- Run the deployment on the cluster:
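For example:

```bash
kubectl apply -f deploy.yaml
```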
- Watch the deployment status:
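For example:

```bash
kubectl get deployment
kubectl get pods -w               # watch until the mysql pod is Running
```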
- See the details of the pod (mounts and volumes):
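The pod name is generated by the Deployment, so list the pods first and substitute your own name (the `<pod-name>` placeholder below is illustrative):

```bash
kubectl get pods
kubectl describe pod <pod-name>   # check the Mounts and Volumes sections
```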
- Enter the pod and see the path where the volume is mounted ("kubectl exec -it <pod-name> -- bash"):
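For example (substitute your pod name):

```bash
kubectl exec -it <pod-name> -- bash
ls /var/lib/mysql                 # the NFS-backed volume is mounted here
```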
- If a new node is added to the cluster and the running pod stops running on the main Minikube node, the pod will start on the other node.
- With this scenario, we can observe the following:
  - The Deployment always keeps a pod running on the cluster.
  - The pod created on the new node still connects to the same persistent volume (no data on the volume is lost).
  - How to assign a taint to a node (key=value:NoExecute; if NoExecute is not tolerated by the pod, the pod is evicted from that node), as sketched after this list.
  - A new pod is created on the new node (the 2nd node).
  - The second pod connects to the same volume again.
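A minimal sketch of this failover test, assuming the default Minikube node name (`minikube`); the taint key/value `app=notallowed` is illustrative:

```bash
minikube node add                                        # add a 2nd node to the cluster
kubectl get nodes
kubectl taint node minikube app=notallowed:NoExecute     # evict non-tolerating pods from the 1st node
kubectl get pods -o wide                                 # the new pod is scheduled on the 2nd node
```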
- Enter the 2nd pod and see the path where the volume is mounted ("kubectl exec -it <pod-name> -- bash"). The files you see at the same path on the 2nd pod are the same volume files:
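For example (substitute the new pod's name):

```bash
kubectl exec -it <pod-name> -- bash
ls /var/lib/mysql                 # same files as before; the volume survived the move
```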
- Delete Minikube, then the Docker container, volume, and network:
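For example:

```bash
minikube delete
docker rm -f nfssrv
docker volume rm nfsvol
docker network rm nfsnet
```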