Fred78290/autoscaled-masterkube-multipass


Licence

Introduction

This project contains everything needed to create a single control plane or HA autoscaling Kubernetes cluster, using any of the following supported engines on the following platforms.

Supported Kubernetes engines:

  • k3s
  • rke2
  • kubeadm
  • microk8s

Supported platforms:

  • AWS
  • CloudStack
  • Multipass
  • OpenStack
  • VMware Fusion
  • VMware Workstation
  • vSphere
  • LXD

The cluster also supports autoscaling through kubernetes-cloud-autoscaler, paired with either the gRPC autoscaler or the vanilla autoscaler.

Prerequisites

Ensure that you have sudo rights and the following tools installed:

Linux      MacOS
envsubst   envsubst
helm       helm
kubectl    kubectl
jq         jq
yq         yq
cfssl      cfssl
-          gnu-getopt
-          gsed
-          gbase64

Depending on the target platform, you must also install:

  • Multipass: packer, packer-plugin-qemu, multipass, kubernetes-desktop-autoscaler-utility, qemu-img
  • OpenStack: packer, packer-plugin-openstack, openstack-client
  • CloudStack: packer, packer-plugin-cloudstack, cmk
  • LXD: packer, packer-plugin-lxd, lxc
  • vSphere: govc
  • VMware Desktop: vmware-vdiskmanager, ovftool, kubernetes-desktop-autoscaler-utility
  • AWS: aws

Some of these tools can be installed via Homebrew.
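For example, on macOS the whole list can be installed in one shot. A minimal sketch, assuming the standard Homebrew formula names (gettext provides envsubst, kubernetes-cli provides kubectl, gnu-sed provides gsed, coreutils provides gbase64):

    # macOS: install the prerequisites with Homebrew
    # gettext -> envsubst, kubernetes-cli -> kubectl, gnu-sed -> gsed, coreutils -> gbase64
    brew install gettext helm kubernetes-cli jq yq cfssl gnu-getopt gnu-sed coreutils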

Create the masterkube

The simplest way to create the masterkube is to run ./bin/create-masterkube.sh --plateform=<aws|cloudstack|desktop|multipass|openstack|vsphere|lxd> --kube-engine=<k3s|rke2|kubeadm|microk8s>
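For example, an illustrative invocation for a k3s cluster on Multipass (the flags are documented in the command line arguments section below):

    # single control plane k3s cluster on multipass
    ./bin/create-masterkube.sh --plateform=multipass --kube-engine=k3s

    # HA variant with 3 worker nodes
    ./bin/create-masterkube.sh --plateform=multipass --kube-engine=k3s --ha-cluster --worker-nodes=3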

Some needed files are located in:

Name Description
bin Essential scripts to build the master kubernetes node
bin/plateform Essential scripts for each target platform
etc/ssl/<chosen domain> Your certificates for https (cert.pem, chain.pem, fullchain.pem, privkey.pem). Self-signed certificates are generated if empty
images Generated qemu images for the multipass platform
templates Template files to deploy pods & services
tools Bootstrap files installed on nodes

The first thing done by this script is to create an Ubuntu 24.04 image with the chosen Kubernetes engine and the needed components installed. The image is stored on the target platform (VM template, AMI, qemu image, ...).
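If you only want to build this image without deploying a cluster, the --create-image-only flag from the arguments table below can be used; a sketch:

    # build only the image for the target platform, then exit
    ./bin/create-masterkube.sh --plateform=aws --kube-engine=rke2 --create-image-only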

The next step launches a VM on the chosen platform and creates a master node.

Private DNS registration

For easier integration, the tool can register the cluster in a private Route53 zone or in an existing RFC 2136-compliant DNS server. It can also deploy an internal Bind9 server and use it for private domain registration. If the platform is OpenStack and Designate is available for your private domain, it will be used.
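A sketch of a private registration, using the --private-domain and --external-dns-provider flags documented below (the domain is a placeholder):

    # register the cluster in an existing RFC 2136-compliant DNS server
    ./bin/create-masterkube.sh --plateform=multipass --kube-engine=k3s \
        --private-domain=cluster.internal --external-dns-provider=rfc2136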

Public DNS registration

If you decide to expose the cluster on the internet, the tool can register the cluster in a public Route53 zone or with the GoDaddy resolver. If the platform is OpenStack and Designate is available for your public domain, it will be used.
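A sketch of a public Route53 registration, using the Route53 flags documented below (all values are placeholders):

    # register the cluster in a public Route53 zone
    ./bin/create-masterkube.sh --plateform=aws --kube-engine=kubeadm \
        --public-domain=example.com \
        --route53-zone-id=Z0123456789ABC \
        --route53-access-key=${AWS_ROUTE53_ACCESSKEY} \
        --route53-secret-key=${AWS_ROUTE53_SECRETKEY}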

Installation per platform

Install on AWS

Install on CloudStack

Install on Multipass

Install on OpenStack

Install on VMware Workstation or VMware Fusion

Install on vSphere

Install on LXD

During the process, many generated files are located at:

Name Description
config/<generated nodegroup name>/cluster Essential files generated by the kubernetes engine, needed to join nodes
config/<generated nodegroup name>/config Configuration files generated during the build process
config/<generated nodegroup name>/config/deployment Files generated by the deployment process

The nodegroup name follows the pattern NODEGROUP_NAME=${PLATEFORM}-${DEPLOY_MODE}-${KUBERNETES_DISTRO}, for example multipass-ha-k3s (assuming an HA deployment mode).

The process also installs the following Kubernetes components:

  • cert manager
  • external dns
  • kubernetes dashboard and metrics scraper
  • kubeapps
  • rancher
  • nginx ingress controller

The kubernetes dashboard is reachable at the URL https://dashboard-<generated-groupname>.<your-domain>/

To connect to the dashboard, copy and paste the token from the file config/<generated nodegroup name>/cluster/dashboard-token
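For example, assuming a nodegroup named multipass-ha-k3s:

    # print the dashboard login token (adjust the nodegroup name to yours)
    cat config/multipass-ha-k3s/cluster/dashboard-token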

The kubeapps UI is reachable at the URL https://kubeapps-<generated-groupname>.<your-domain>/

The Rancher UI is reachable at the URL https://rancher-<generated-groupname>.<your-domain>/

A helloworld replicaset is also deployed to demonstrate the autoscaling capability.

Raise autoscaling

To scale the cluster up or down, simply use kubectl scale.

To scale a fresh masterkube: kubectl scale --replicas=2 deploy/helloworld -n kube-public
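When the replicas exceed the available capacity, the autoscaler provisions new nodes; a sketch using standard kubectl commands:

    # watch the autoscaler add nodes as pending pods appear
    kubectl get nodes -w

    # scale back down; unneeded nodes are removed after the configured scale-down delays
    kubectl scale --replicas=1 deploy/helloworld -n kube-public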

Delete master kube and worker nodes

To delete the master kube and associated worker nodes, just run ./bin/delete-masterkube.sh --plateform=<plateform> --kube-engine=<engine>

Common command line arguments

Variable definitions are located in the file common.sh.

Variables can be overridden in the file ./bin/plateform/<plateform>/override.sh
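A minimal sketch of an override file, using variable names from the table below (the values are placeholders):

    # ./bin/plateform/multipass/override.sh
    # override defaults defined in common.sh
    HA_CLUSTER=true
    WORKERNODES=3
    KUBERNETES_VERSION=v1.30.1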

Parameter Description Default
--help | -h Display usage
--plateform=[aws|cloudstack|desktop|multipass|openstack|vsphere|lxd] Where to deploy cluster ${PLATEFORM}
--verbose | -v Verbose NO
--trace | -x Trace execution NO
--resume | -r Resume an interrupted cluster creation
--delete | -d Delete cluster and exit
--distribution Ubuntu distribution to use ${UBUNTU_DISTRIBUTION}
--create-image-only Create image only NO
--upgrade Upgrade the existing cluster to a newer version of kubernetes
Flags to set location information
--configuration-location=<path> Specify where configuration will be stored ${CONFIGURATION_LOCATION}
--ssl-location=<path> Specify where the etc/ssl dir is stored ${SSL_LOCATION}
--defs=<path> Specify the hidden plateform variables ./bin/plateform/${PLATEFORM}/vars.def
Design the kubernetes cluster
--autoscale-machine=<value> Override machine type used for auto scaling ${AUTOSCALE_MACHINE}
--cni-plugin=<value> Override CNI plugin ${CNI_PLUGIN}
--cni-version=<value> Override CNI plugin version ${CNI_VERSION}
--container-runtime=[containerd|cri-o] Specify which OCI runtime to use ${CONTAINER_ENGINE}
--control-plane-machine=<value> Override machine type used for control plane ${CONTROL_PLANE_MACHINE}
--ha-cluster | -c Create an HA cluster ${HA_CLUSTER}
--kube-engine=[kubeadm|k3s|rke2|microk8s] Which kubernetes distribution to use ${KUBERNETES_DISTRO}
--kube-version | -k=<value> Override the kubernetes version ${KUBERNETES_VERSION}
--max-pods=<value> Specify the max pods per created VM ${MAX_PODS}
--nginx-machine=<value> Override machine type used for nginx as ELB ${NGINX_MACHINE}
--node-group=<value> Override the node group name ${NODEGROUP_NAME}
--ssh-private-key=<path> Override the ssh key to use ${SSH_PRIVATE_KEY}
--transport=[tcp|unix] Override the transport to be used between autoscaler and kubernetes-cloud-autoscaler unix
--worker-node-machine=<value> Override machine type used for worker nodes ${WORKER_NODE_MACHINE}
--worker-nodes=<value> Specify the number of worker nodes created in HA cluster ${WORKERNODES}
--create-external-etcd | -e Create an external HA etcd cluster ${EXTERNAL_ETCD}
--use-cloud-init Use cloud-init to configure autoscaled nodes instead of ssh ${USE_CLOUDINIT_TO_CONFIGURE}
Design domain
--public-domain=<value> Specify the public domain to use ${PUBLIC_DOMAIN_NAME}
--private-domain=<value> Specify the private domain to use ${PRIVATE_DOMAIN_NAME}
--dashboard-hostname=<value> Specify the hostname for kubernetes dashboard ${DASHBOARD_HOSTNAME}
--external-dns-provider=[none|aws|godaddy|rfc2136|designate] Specify external dns provider. ${EXTERNAL_DNS_PROVIDER}
Cert Manager
--cert-email=<value> Specify the email for Let's Encrypt ${CERT_EMAIL}
--use-zerossl Specify cert-manager to use zerossl ${USE_ZEROSSL}
--use-self-signed-ca Specify whether to use a self-signed CA ${USE_CERT_SELFSIGNED}
--zerossl-eab-kid=<value> Specify zerossl eab kid ${CERT_ZEROSSL_EAB_KID}
--zerossl-eab-hmac-secret=<value> Specify zerossl eab hmac secret ${CERT_ZEROSSL_EAB_HMAC_SECRET}
GoDaddy
--godaddy-key=<value> Specify godaddy api key ${CERT_GODADDY_API_KEY:=GODADDY_API_KEY}
--godaddy-secret=<value> Specify godaddy api secret ${CERT_GODADDY_API_SECRET:=GODADDY_API_SECRET}
Route53
--route53-zone-id=<value> Specify the route53 zone id ${AWS_ROUTE53_PUBLIC_ZONE_ID}
--route53-access-key=<value> Specify the route53 aws access key ${AWS_ROUTE53_ACCESSKEY}
--route53-secret-key=<value> Specify the route53 aws secret key ${AWS_ROUTE53_SECRETKEY}
Flags for autoscaler
--grpc-provider=[grpc|externalgrpc] autoscaler flag ${GRPC_PROVIDER}
--max-nodes-total=<value> autoscaler flag ${MAXTOTALNODES}
--cores-total=<value> autoscaler flag ${CORESTOTAL}
--memory-total=<value> autoscaler flag ${MEMORYTOTAL}
--max-autoprovisioned-node-group-count=<value> autoscaler flag ${MAXAUTOPROVISIONNEDNODEGROUPCOUNT}
--scale-down-enabled=<value> autoscaler flag ${SCALEDOWNENABLED}
--scale-down-utilization-threshold=<value> autoscaler flag ${SCALEDOWNUTILIZATIONTHRESHOLD}
--scale-down-gpu-utilization-threshold=<value> autoscaler flag ${SCALEDOWNGPUUTILIZATIONTHRESHOLD}
--scale-down-delay-after-add=<value> autoscaler flag ${SCALEDOWNDELAYAFTERADD}
--scale-down-delay-after-delete=<value> autoscaler flag ${SCALEDOWNDELAYAFTERDELETE}
--scale-down-delay-after-failure=<value> autoscaler flag ${SCALEDOWNDELAYAFTERFAILURE}
--scale-down-unneeded-time=<value> autoscaler flag ${SCALEDOWNUNEEDEDTIME}
--scale-down-unready-time=<value> autoscaler flag ${SCALEDOWNUNREADYTIME}
--max-node-provision-time=<value> autoscaler flag ${MAXNODEPROVISIONTIME}
--unremovable-node-recheck-timeout=<value> autoscaler flag ${UNREMOVABLENODERECHECKTIMEOUT}
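Putting several of these flags together, a hypothetical HA deployment could look like this (all values are placeholders):

    # HA rke2 cluster on vSphere with external etcd and Let's Encrypt certificates
    ./bin/create-masterkube.sh \
        --plateform=vsphere \
        --kube-engine=rke2 \
        --ha-cluster \
        --create-external-etcd \
        --worker-nodes=3 \
        --public-domain=example.com \
        --cert-email=admin@example.com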
