Home Project Stack
The stack is deployed on a Kubernetes cluster provided by microk8s (https://microk8s.io/docs). microk8s is installed via the snap package manager; the package is provided by Canonical (the publisher of Ubuntu).
- Resources: quad-core ARM64 (aarch64) processor with 8 GB RAM
- Kernel: GNU/Linux 6.8.0-1015-raspi aarch64
- OS: Ubuntu 24.04.1
As of now it is deployed as a 2-node cluster.
- Alok Singh
- Home Stack
- Table of contents
- Prerequisites
- Deployment of home-stack Kubernetes Stack
- Note: use one of these tokens for Kubernetes Dashboard login
- Kubernetes Metrics Server
- Create ConfigMap
- Create Secrets
- Create Network policy
- MySQL Service - Pod/Deployment/Service
- Home Network Troubleshoot - Pod/Statefulset/Service
- Home API Service - Pod/Deployment/Service
- Home Email Service - Pod/Deployment/Service
- Home Auth Service - Pod/Deployment/Service
- Home Analytics Service - Pod/Deployment/Service
- Home ETL Service - Pod/Statefulset/Service
- Home GIT Commit CronJob (retired)
- Dashboard Service - Pod/Deployment/Service
- Jaeger Service
- Delete Stack
- Ingress
- Horizontal Autoscaling
- Miscellaneous commands
- Service Mesh - Istio
- Backup
- Network Monitoring
- Deployment Architecture
ssh alok@jgte "mkdir yaml"
scp yaml/namespace.yaml alok@jgte:yaml/
ssh alok@jgte "kubectl apply -f yaml/namespace.yaml"
So that cluster operations can be performed by running kubectl remotely:
scp yaml/home-user-rback-cluster-admin-user.yaml alok@jgte:yaml/
ssh alok@jgte "kubectl apply -f yaml/home-user-rback-cluster-admin-user.yaml"
At the end, to remove the taint:
kubectl taint nodes jgte nodeType=master:NoSchedule-
At the end, to remove the taint:
kubectl taint nodes khbr nodeType=worker:NoSchedule-
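While those taints are in place, only pods carrying a matching toleration can be scheduled on the tainted node. A minimal sketch of the toleration a pod spec would need for the jgte taint (how the actual manifests handle placement is not shown in this document):

```yaml
# Example pod-spec fragment: toleration matching the nodeType=master:NoSchedule taint on jgte.
tolerations:
  - key: "nodeType"
    operator: "Equal"
    value: "master"
    effect: "NoSchedule"
```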
kubectl apply -f yaml/kubernetes-dashboard.yaml
Note: the dashboard service type is LoadBalancer and a static host IP is assigned. The Dashboard can be accessed directly at https://jgte:8443/
kubectl delete -f yaml/kubernetes-dashboard.yaml
kubectl get all --namespace kubernetes-dashboard
kubectl get svc --namespace kubernetes-dashboard
kubectl apply -f yaml/kubernetes-dashboard-rback-dashboard-admin-user.yaml
kubectl create token k8s-dashboard-admin-user --duration=999999h -n kubernetes-dashboard
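A sketch of what yaml/kubernetes-dashboard-rback-dashboard-admin-user.yaml plausibly contains, assuming the usual ServiceAccount-plus-binding pattern; only the ServiceAccount name and namespace are taken from the token command above, and the role actually bound may be narrower than shown here:

```yaml
# Hypothetical sketch -- binding to cluster-admin is an assumption; the real file may use a more restricted role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-dashboard-admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-dashboard-admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: k8s-dashboard-admin-user
    namespace: kubernetes-dashboard
```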
kubectl apply -f yaml/kubernetes-dashboard-rback-cluster-admin-user.yaml
kubectl create token k8s-dashboard-cluster-admin-user --duration=999999h -n kubernetes-dashboard
Note: the above doesn't have the workloads get role
kubectl apply -f yaml/metrix-server.yaml
kubectl delete -f yaml/metrix-server.yaml
kubectl get deployment metrics-server -n kube-system
kubectl top nodes
kubectl apply -f yaml/config-map.yaml
Note: add/update the below configs from the backup in ~/k8s (see the sketch after this list):
- home-api-cofig (home-stack): iot-secure-keystore-password, iot-secure-truststore-password
- home-auth-cofig (home-stack): application-security-jwt-secret, oauth-google-client-id, logging-level-com-alok
- home-etl-cofig (home-stack): git-bearer-token
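A hedged sketch of how one of these entries could look inside yaml/config-map.yaml; the ConfigMap name, namespace, and keys come from the list above, while the values are placeholders (the real ones live in the ~/k8s backup):

```yaml
# Hypothetical sketch of one ConfigMap entry -- values are illustrative placeholders only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: home-api-cofig
  namespace: home-stack
data:
  iot-secure-keystore-password: "<<from-backup>>"
  iot-secure-truststore-password: "<<from-backup>>"
```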
kubectl apply -f yaml/secrets.yaml
kubectl apply -f yaml/networkpolicy.yaml
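A minimal sketch of the kind of rule yaml/networkpolicy.yaml might declare, assuming the intent is to let pods in home-stack reach MySQL in home-stack-db; the label selectors are assumptions:

```yaml
# Hypothetical sketch -- pod/namespace selectors are assumed, not copied from the repo.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-home-stack-to-mysql
  namespace: home-stack-db
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: home-stack
      ports:
        - protocol: TCP
          port: 3306
```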
ssh alok@jgte mkdir -p /home/alok/data/mysql
kubectl apply --validate=true --dry-run=client -f yaml/mysql-service.yaml
kubectl apply -f yaml/mysql-service.yaml
kubectl delete -f yaml/mysql-service.yaml
kubectl exec -it pod/mysql-0 --namespace home-stack-db -- mysql -u root -p<<password>>
CREATE DATABASE `home-stack`;
kubectl exec -it pod/mysql-0 --namespace home-stack-db -- mysql -u root -p home-stack
kubectl logs pod/mysql-0 --namespace home-stack-db
mysql -u root -p home-stack --host 127.0.0.1 --port 32306
Note:
Run Liquibase to create the batch tables and add application users and roles.
Follow the link to configure SQL Developer on a Mac to connect to the MySQL server remotely.
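The remote client command above connects on port 32306, which suggests the MySQL Service is exposed as a NodePort (the table at the end of this document lists it as NodePort too). A minimal sketch of such a Service, with selector labels assumed:

```yaml
# Hypothetical sketch of the MySQL Service -- selector labels are assumed; ports come from this document.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: home-stack-db
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - name: mysql
      port: 3306        # in-cluster port, matches jdbc:mysql://mysql:3306/home-stack
      targetPort: 3306
      nodePort: 32306   # used for access from outside the cluster
```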
kubectl apply --validate=true --dry-run=client -f yaml/home-nw-tshoot.yaml
kubectl apply -f yaml/home-nw-tshoot.yaml --namespace=home-stack
kubectl delete -f yaml/home-nw-tshoot.yaml --namespace=home-stack
kubectl exec -it pod/home-nw-tshoot-deployment-0 --namespace home-stack -- zsh
kubectl apply --validate=true --dry-run=client -f yaml/home-api-service.yaml
kubectl apply -f yaml/home-api-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-api-service.yaml --namespace=home-stack
kubectl exec -it pod/home-api-deployment-0 --namespace home-stack -- bash
kubectl exec -it pod/home-api-deployment-0 --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-api-deployment-0 --namespace home-stack
kubectl rollout restart statefulset.apps/home-api-deployment -n home-stack
kubectl apply --validate=true --dry-run=client -f yaml/home-email-service.yaml
kubectl apply -f yaml/home-email-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-email-service.yaml --namespace=home-stack
kubectl apply --validate=true --dry-run=client -f yaml/home-auth-service.yaml
kubectl apply -f yaml/home-auth-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-auth-service.yaml --namespace=home-stack
kubectl exec -it pod/home-auth-deployment-0 --namespace home-stack -- bash
read instance
kubectl exec -it pod/home-auth-deployment-$instance --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-auth-deployment-$instance --namespace home-stack
kubectl rollout restart statefulset.apps/home-auth-deployment -n home-stack
kubectl apply --validate=true --dry-run=client -f yaml/home-analytics-service.yaml
kubectl apply -f yaml/home-analytics-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-analytics-service.yaml --namespace=home-stack
read instance
kubectl logs pod/home-analytics-deployment-$instance --namespace home-stack
kubectl exec -it pod/home-analytics-deployment-$instance --namespace home-stack -- bash
kubectl apply --validate=true --dry-run=client -f yaml/home-search-service.yaml
kubectl apply -f yaml/home-search-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-search-service.yaml --namespace=home-stack
read instance
kubectl logs pod/home-search-deployment-$instance --namespace home-stack
kubectl exec -it pod/home-search-deployment-$instance --namespace home-stack -- bash
kubectl apply --validate=true --dry-run=client -f yaml/home-etl-service.yaml
kubectl apply -f yaml/home-etl-service.yaml --namespace=home-stack
kubectl delete -f yaml/home-etl-service.yaml --namespace=home-stack
kubectl exec -it pod/home-etl-deployment-0 --namespace home-stack -- bash
kubectl exec -it pod/home-etl-deployment-0 --namespace home-stack -- tail -f /opt/logs/application.log
kubectl logs pod/home-etl-deployment-0 --namespace home-stack
kubectl rollout restart statefulset.apps/home-etl-deployment -n home-stack
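The ETL service is listed as ClusterIP (Headless) in the table at the end of this document; a minimal sketch of what a headless Service in front of the StatefulSet could look like (the service name, labels, and port are assumptions):

```yaml
# Hypothetical sketch of a headless Service for the ETL StatefulSet -- name, selector, and port are assumed.
apiVersion: v1
kind: Service
metadata:
  name: home-etl-service
  namespace: home-stack
spec:
  clusterIP: None        # headless: DNS resolves directly to the pod IPs (home-etl-deployment-0, ...)
  selector:
    app: home-etl
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```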
kubectl apply --validate=true --dry-run=client -f yaml/git-commit-cronjob.yaml
kubectl apply -f yaml/git-commit-cronjob.yaml --namespace=home-stack
kubectl delete -f yaml/git-commit-cronjob.yaml --namespace=home-stack
kubectl apply --validate=true --dry-run=client -f yaml/dashboard-service.yaml
kubectl apply -f yaml/dashboard-service.yaml
kubectl delete -f yaml/dashboard-service.yaml
kubectl exec -it deployment.apps/dashboard-deployment --namespace home-stack-dmz -- /bin/sh
kubectl logs deployment.apps/dashboard-deployment --namespace home-stack-dmz
kubectl apply --validate=true --dry-run=client -f yaml/jaeger-all-in-one-template.yml
kubectl apply -f yaml/jaeger-all-in-one-template.yml --namespace=home-stack
kubectl delete -f yaml/jaeger-all-in-one-template.yml --namespace=home-stack
kubectl delete namespace home-stack-dmz
kubectl delete namespace home-stack
kubectl delete namespace home-stack-db
kubectl apply -f yaml/ingress.yaml
kubectl delete -f yaml/ingress.yaml
kubectl get ingress -n home-stack-dmz
kubectl describe ingress -n home-stack-dmz
kubectl describe ingress ingress-home-jgte --namespace home-stack-dmz
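A hedged sketch of the kind of rule yaml/ingress.yaml may define; the ingress name, namespace, and dashboard-service backend appear elsewhere in this document, while the backend port and path layout are assumptions:

```yaml
# Hypothetical sketch -- backend port and path layout are assumed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-home-jgte
  namespace: home-stack-dmz
spec:
  rules:
    - host: jgte
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard-service
                port:
                  number: 80
```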
kubectl get all --namespace ingress
kubectl describe daemonset.apps/nginx-ingress-microk8s-controller --namespace ingress
kubectl describe pod/nginx-ingress-microk8s-controller-8wmwc --namespace ingress
kubectl get all --namespace ingress
kubectl logs nginx-ingress-microk8s-controller-8wmwc --namespace ingress
kubectl apply --validate=true --dry-run=client -f yaml/home-hpa.yaml
kubectl apply -f yaml/home-hpa.yaml --namespace=home-stack
kubectl get hpa
kubectl describe hpa home-auth-hpa
kubectl describe hpa home-api-hpa
kubectl describe hpa home-analytics-hpa
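A sketch of one HorizontalPodAutoscaler entry that yaml/home-hpa.yaml plausibly contains; only the name home-api-hpa appears in the commands above, while the replica counts, CPU target, and target kind are assumptions:

```yaml
# Hypothetical sketch -- min/max replicas and the CPU target are assumed values.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: home-api-hpa
  namespace: home-stack
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet        # assumed from the rollout restart commands above
    name: home-api-deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```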
kubectl autoscale deployment dashboard-deployment --min=2 --max=3 -n home-stack
kubectl get hpa --namespace home-stack
kubectl edit hpa dashboard-deployment --namespace home-stack
kubectl scale -n home-stack deployment dashboard-deployment --replicas=1
kubectl version --output=json
This gives details about nodes, including the images present locally:
kubectl get nodes -o yaml
kubectl describe nodes
kubectl get ResourceQuota
This gives a cluster dump, including all pod logs:
kubectl cluster-info dump > ~/k8s/cluster-dump.log
kubectl get all --all-namespaces
kubectl get svc --all-namespaces
kubectl describe svc dashboard-service --namespace home-stack-dmz
kubectl describe svc kubernetes-dashboard --namespace kubernetes-dashboard
kubectl logs pod/dashboard-deployment-65cf5b8858-7x8z8 --namespace home-stack
kubectl describe pod home-etl-deployment-0 --namespace=home-stack
kubectl top pods
kubectl top pod home-etl-deployment-0 --containers
kubectl get po -A -o wide
kubectl api-resources
kubectl explain --api-version="networking.k8s.io/v1" NetworkPolicy.spec
kubectl explain --api-version="networking.k8s.io/v1" NetworkPolicy.spec.ingress
kubectl explain --api-version="batch/v1beta1" cronjobs.spec
kubectl get crd
kubectl explain --api-version="apiregistration.k8s.io/v1" APIService
kubectl explain --api-version="apiextensions.k8s.io/v1" CustomResourceDefinition
kubectl cheat sheet - https://kubernetes.io/docs/reference/kubectl/cheatsheet/
To be explored - it seems the microk8s istio addon is not supported on the ARM64 architecture, whereas the same is supported for minikube.
This is needed as some config items are updated directly in the cluster through the Kubernetes Dashboard for security reasons:
kubectl get configmap --namespace=home-stack stmt-parser-cofig -o yaml > ~/k8s/stmt-parser-cofig.yaml
kubectl get configmap --namespace=home-stack home-etl-cofig -o yaml > ~/k8s/home-etl-cofig.yaml
kubectl get configmap --namespace=home-stack home-api-cofig -o yaml > ~/k8s/home-api-cofig.yaml
kubectl get configmap --namespace=home-stack home-auth-cofig -o yaml > ~/k8s/home-auth-cofig.yaml
kubectl get configmap --namespace=home-stack dashboard-cofig -o yaml > ~/k8s/dashboard-cofig.yaml
kubectl get configmap --namespace=home-stack home-common-cofig -o yaml > ~/k8s/home-common-cofig.yaml
kubectl get configmap --namespace=home-stack-dmz nginx-conf -o yaml > ~/k8s/nginx-conf.yaml
kubectl get configmap --namespace=home-stack home-email-cofig -o yaml > ~/k8s/home-email-cofig.yaml
This is needed as some secret items are updated directly in the cluster through the Kubernetes Dashboard for security reasons:
kubectl get secrets --namespace=home-stack mysql-secrets -o yaml > ~/k8s/mysql-secrets.yaml
kubectl get secrets --namespace=home-stack-db mysql-secrets -o yaml > ~/k8s/mysql-secrets-db.yaml
kubeshark tap
The Kubeshark dashboard is accessible at http://localhost:8899
kubeshark clean
Application | Description | Service Type | Deployment/StatefulSet/CronJob/DaemonSet | URL | Comments |
---|---|---|---|---|---|
Home ETL Service | ETL for bank statements and other sources | ClusterIP (Headless) | StatefulSet | /home/etl | NA |
Home Auth Service | Home AuthN and AuthZ service | ClusterIP | Deployment | /home/api | GraalVM based native Image |
Home API Service | API for Bank/Expense/Tax/Investment/etc. | ClusterIP | Deployment | /home/api | GraalVM based native Image |
Home Analytics Service | gRPC interface to categorize expenses | ClusterIP | Deployment | /home/api | GraalVM based native Image |
Home Email Service | IMAP to read bank transactions and SMTP to send mail | ClusterIP | Deployment | /home/api | GraalVM based native Image |
Home Dashboard | ReactJS App on Nginx | NodePort | Deployment | http://jgte:30080 or https://jgte | For multinode deployment the interface has to be changed to ClusterIP and put behind Ingress; externalTrafficPolicy: Local to disable SNATing (see the sketch after this table) |
Home GIT Cronjob | Cronjob to update GIT with uploaded statements (not in use) | None | CronJob | NA | NA |
Database | MySQL | NodePort | StatefulSet | jdbc:mysql://mysql:3306/home-stack | NodePort because I want to access SQL from outside of the cluster |
Kubernetes Dashboard | | LoadBalancer (static IP) | Deployment | https://jgte:8443/ | |
Kubernetes Metrics Server | Generates resource utilization metrics | ClusterIP | Deployment | NA | |
Kubernetes Metrics Scraper | Metrics scraper for pods | ClusterIP | Deployment | NA | |
Jaeger Dashboard | | NodePort | Deployment | http://jgte:31686/ | |
Ingress Controller | Nginx Ingress Controller | NodePort | DaemonSet | Port: 443 | API/ETL/Dashboard are behind Nginx, but the Dashboard is still accessible directly (the hostname can't be resolved from mobile; requires a local DNS server) |
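The Home Dashboard row mentions externalTrafficPolicy: Local; a minimal sketch of a NodePort Service using it, with the 30080 port from the table and selector labels/container port assumed:

```yaml
# Hypothetical sketch -- selector labels and container port are assumed; nodePort 30080 comes from the table above.
apiVersion: v1
kind: Service
metadata:
  name: dashboard-service
  namespace: home-stack-dmz
spec:
  type: NodePort
  externalTrafficPolicy: Local   # keep traffic on the receiving node and preserve the client source IP (no SNAT)
  selector:
    app: dashboard
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
```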
graph LR
A[Write Code] --> B{Does it work?}
B -- Yes --> C[Great!]
B -- No --> D[Google]
D --> A