Cooby Cloud Extras
- Odoo Deployment on K8s - XOE Odoo Operator
- Odoo Dev Pipeline - XOE Dockery-Odoo
- Cluster Backup Architecture - S3 and custom storage
Provided you got everything working in the previous step (HTTPS works and Let's Encrypt was set up automatically for your Traefik dashboard), you can continue on.
First we'll be making a registry.yaml file with our custom values and enabling Traefik for our ingress:
```yaml
replicaCount: 1

ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - registry.svc.bluescripts.net
  annotations:
    kubernetes.io/ingress.class: traefik

persistence:
  accessMode: 'ReadWriteOnce'
  enabled: true
  size: 10Gi
  # storageClass: '-'

# set the type of filesystem to use: filesystem, s3
storage: filesystem

secrets:
  haSharedSecret: ""
  htpasswd: "YOUR_DOCKER_USERNAME:GENERATE_YOUR_OWN_HTPASSWD_FOR_HERE"
```
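The registry only accepts bcrypt htpasswd entries. One way to generate yours (a sketch, assuming the apache2-utils package is available; the username is a placeholder):

```
$ sudo apt-get install -y apache2-utils
$ htpasswd -nB YOUR_DOCKER_USERNAME   # -n prints to stdout, -B uses bcrypt
```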
And putting it all together:

```
$ sudo helm install -f registry.yaml --name registry stable/docker-registry
```
Provided all that worked, you should now be able to log in to your registry at registry.sub.yourdomain.com and push and pull images.
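For example (a sketch, assuming your registry answers at registry.sub.yourdomain.com and you use the htpasswd credentials from above; the image name is a placeholder):

```
$ docker login registry.sub.yourdomain.com
$ docker tag myapp:latest registry.sub.yourdomain.com/myapp:latest
$ docker push registry.sub.yourdomain.com/myapp:latest
```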
There are several ways to set up docker auth (ServiceAccounts, for instance, or ImagePullSecrets) - we'll show the latter.
Take your Docker config, which should look something like this:
```json
{
  "auths": {
    "registry.sub.yourdomain.com": {
      "auth": "BASE64 ENCODED user:pass"
    }
  }
}
```
Now base64 encode that whole file/string so it's all one line.
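For example, with GNU coreutils (`-w 0` disables line wrapping so the output stays on one line):

```
$ base64 -w 0 ~/.docker/config.json
```

Then create a registry-creds.yaml file, pasting the output in as BASE64_ENCODED_CREDENTIALS: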
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
  namespace: your_app_namespace
data:
  .dockerconfigjson: BASE64_ENCODED_CREDENTIALS
type: kubernetes.io/dockerconfigjson
```
Create your app namespace and apply the secret:

```
$ kubectl create namespace your_app_namespace
$ kubectl apply -f registry-creds.yaml
```
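To confirm the secret landed in the right namespace:

```
$ kubectl get secret regcred -n your_app_namespace
```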
You can now delete this file (or encrypt it with GPG, etc) - just don’t commit it anywhere. Base64 encoding a string won’t protect your credentials.
You would then specify it in your Helm chart's deployment.yaml like:
```yaml
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      imagePullSecrets:
        - name: regcred
```
We generally make a deployments folder, then do a helm create app_name in there. You'll want to edit the values.yaml file to match your Docker image names and vars.
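A minimal sketch of that layout (app_name is a placeholder, and $EDITOR stands in for your editor of choice):

```
$ mkdir -p deployments && cd deployments
$ helm create app_name
$ $EDITOR app_name/values.yaml   # set image.repository, image.tag, service.internalPort, etc.
```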
Next, edit the templates/ingress.yaml file and make sure it has a Traefik annotation:

```yaml
annotations:
  kubernetes.io/ingress.class: traefik
```
And finally, here is an example deployment.yaml that has a few extra things compared to the default:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      imagePullSecrets:
        - name: regcred
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: "kubernetes.io/hostname"
              labelSelector:
                matchLabels:
                  app: {{ template "fullname" . }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
          httpGet:
            path: /
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: 5
          periodSeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: {{ .Values.service.internalPort }}
          initialDelaySeconds: 5
          timeoutSeconds: 5
        resources:
{{ toYaml .Values.resources | indent 10 }}
```
The imagePullSecrets block specifies the registry credentials we created in the previous step.
Assuming a replica count >= 2, the podAntiAffinity block tells Kubernetes to schedule the pods on different worker nodes. This prevents both web servers (for instance) from being put on the same node, so a single node crashing doesn't take out every replica.
The liveness and readiness probe values are going to depend on your app - if your server is slow to start up, these values may not make sense and can leave your app constantly cycling between Running and Error states as the liveness checks reap its containers.
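For a slow-starting server, something more forgiving might look like this (a sketch; the numbers are placeholders you would tune for your app):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: {{ .Values.service.internalPort }}
  initialDelaySeconds: 60   # give the server a full minute before the first check
  periodSeconds: 30
  timeoutSeconds: 10
```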
And provided you just have configuration changes to try out (the container is already built and in a registry), you can iterate locally:

```
$ sudo helm upgrade your_app_name . -i --namespace your_app_name --wait --debug
```
Here is a sample .gitlab-ci.yml that you can use for deploying - this one builds a small Go binary for ifcfg.net:
```yaml
stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  IMAGE: registry.sub.yourdomain.com/project/web

services:
  - docker:dind

build:
  image: docker:latest
  stage: build
  script:
    - docker build -t $IMAGE:$CI_COMMIT_SHA .
    - docker tag $IMAGE:$CI_COMMIT_SHA $IMAGE:latest
    - docker push $IMAGE:$CI_COMMIT_SHA
    - docker push $IMAGE:latest

deploy:
  image: ubuntu
  stage: deploy
  before_script:
    - apt-get update && apt-get install -y curl
    - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
  script:
    - cd deployment/ifcfg
    - KUBECONFIG=<(echo "$KUBECONFIG") helm upgrade ifcfg . -i --namespace ifcfg --wait --debug --set "image.tag=$CI_COMMIT_SHA"
```
The build job's script is where we tag both the committed SHA and latest, then push both to the registry we made in the previous steps.
The deploy job's before_script installs Helm.
The final helm upgrade line is doing a few things. First, note that we have our ~/.kube/config file set as an environment variable in GitLab. The <(echo "$KUBECONFIG") construct is bash process substitution - it makes the variable's contents look like a file on the file system, so we don't have to write the config out in a separate step. upgrade -i says to upgrade our app, and if it doesn't exist yet, to install it. The last important bit is image.tag=$CI_COMMIT_SHA - this sets you up for deploying tagged releases instead of always deploying the latest from your repository.
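If the process substitution trick is unfamiliar, here is a minimal demonstration (requires bash, not plain sh):

```
$ cat <(echo "hello")   # the echo output is exposed to cat as a file descriptor
hello
```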
That's it - you should now have an automated build pipeline set up for a project on your working Kubernetes cluster.
Install Docker Version for Rancher

```
$ sudo apt-get update
$ curl https://releases.rancher.com/install-docker/17.03.sh | sh
$ sudo usermod -aG docker ubuntu
```
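After logging out and back in (so the group change takes effect), you can confirm the 17.03 engine Rancher expects is in place:

```
$ docker --version
```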
Set Domain Name - set up the domain at your registrar (an A record pointing to the Rancher instance).
Open software/hardware firewall inbound TCP ports 80 and 443.
Install Rancher 2.1.0 with Acme-DNS and Let's Encrypt. After obtaining certs, run the Docker command below:
- Use the -v flag and provide the path to your certificates to mount them in your container.
- Replace <CERT_DIRECTORY> with the directory path to your certificate files.
- Replace <FULL_CHAIN.pem> and <PRIVATE_KEY.pem> with your certificate names.
- Use the --no-cacerts argument to the container to disable the default CA certificate generated by Rancher.
```
$ sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /host/rancher:/var/lib/rancher \
  -v /<CERT_DIRECTORY>/<FULL_CHAIN.pem>:/etc/rancher/ssl/cert.pem \
  -v /<CERT_DIRECTORY>/<PRIVATE_KEY.pem>:/etc/rancher/ssl/key.pem \
  rancher/rancher:v2.1.0 --no-cacerts
```
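A quick way to confirm the Rancher container came up:

```
$ sudo docker ps --filter ancestor=rancher/rancher:v2.1.0
```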
Install kubectl
```
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo touch /etc/apt/sources.list.d/kubernetes.list
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update
$ sudo apt-get install -y kubectl
$ mkdir ~/.kube
```
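A quick sanity check that the client installed correctly:

```
$ kubectl version --client
```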
Copy Master Cluster Config to Rancher Server
- On master: $ ssh-keygen -t rsa
- On rancher: append the master's ~/.ssh/id_rsa.pub to the Rancher ~/.ssh/authorized_keys file
- On master: $ cd ~/.kube
- On master: $ scp config ubuntu@<RancherIP>:/home/ubuntu/.kube
- On rancher: $ kubectl cluster-info
Import Cluster into Rancher
- Cluster -> Add Cluster -> Import
- Add sandbox name and membership -> Next
- Run first generated command on Rancher:
$ kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user kubernetes-admin
(the user comes from your ~/.kube/config file - see the tip after this list)
- Run second generated command(s) on Master:
$ kubectl apply -f https://rancher.cooby.online/v3/import/l8rjv7pc75mrpmvlngdfcqh69wqk67......yaml
- Wait for cluster to become active in Rancher
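If you're not sure which user name to pass to the clusterrolebinding command above, you can read it out of your kubeconfig:

```
$ kubectl config view -o jsonpath='{.users[0].name}'
```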