diff --git a/helm/nats-streaming-operator/Chart.yaml b/helm/nats-streaming-operator/Chart.yaml
new file mode 100644
index 0000000..238fa8b
--- /dev/null
+++ b/helm/nats-streaming-operator/Chart.yaml
@@ -0,0 +1,18 @@
+name: nats-streaming-operator
+version: 1.0.0
+appVersion: 0.2.3
+description: Operator for managing NATS Streaming clusters running on Kubernetes.
+keywords:
+- nats
+- streaming
+- delivery
+- ratelimit
+- replay
+- restart
+home: https://github.com/nats-io/nats-streaming-operator
+sources:
+- https://github.com/nats-io/nats-streaming-operator
+maintainers:
+- name: starefossen
+  email: hans@starefossen.com
+icon: https://bitnami.com/assets/stacks/nats/img/nats-stack-110x117.png
diff --git a/helm/nats-streaming-operator/README.md b/helm/nats-streaming-operator/README.md
new file mode 100644
index 0000000..c5e6116
--- /dev/null
+++ b/helm/nats-streaming-operator/README.md
@@ -0,0 +1,135 @@
+# NATS Streaming Operator Helm Chart
+
+NATS Streaming is an extremely performant, lightweight, and reliable streaming
+platform built on NATS.
+
+## TL;DR
+
+```bash
+$ helm install .
+```
+
+## Introduction
+
+NATS Streaming Operator manages [NATS
+Streaming](https://github.com/nats-io/nats-streaming-server) clusters atop
+[Kubernetes](http://kubernetes.io), automating their creation and
+administration. With the NATS Streaming Operator you benefit from the
+flexibility of the Kubernetes operator pattern: less juggling of manifests,
+plus handy features such as automatic configuration reload.
+
+If you want to manage NATS Streaming entirely by yourself and keep more
+control over your cluster, you can always use the classic [NATS
+Streaming](https://github.com/nats-io/nats-streaming-operator/releases)
+deployment.
+
+## Prerequisites
+
+- Kubernetes 1.8+
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install --name my-release .
+```
+
+The command deploys the NATS Streaming Operator on the Kubernetes cluster in
+the default configuration. The [configuration](#configuration) section lists
+the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart
+and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the NATS Streaming
+Operator chart and their default values.
+
+| Parameter                             | Description                                                                                   | Default                                          |
+| ------------------------------------- | --------------------------------------------------------------------------------------------- | ------------------------------------------------ |
+| `rbacEnabled`                          | Switch to enable/disable RBAC for this chart                                                  | `true`                                           |
+| `replicas`                             | Number of NATS Streaming Operator replicas                                                    | `1`                                              |
+| `cluster.enabled`                      | Deploy a NATS Streaming cluster with the operator                                             | `true`                                           |
+| `cluster.name`                         | Name of the NATS Streaming cluster                                                            | `nats-streaming-cluster`                         |
+| `cluster.version`                      | Version of NATS Streaming to deploy                                                           | `0.12.2`                                         |
+| `cluster.size`                         | Number of nodes in the cluster                                                                | `3`                                              |
+| `cluster.natsSvc`                      | Name of the NATS cluster service to connect to                                                | `nats-cluster`                                   |
+| `cluster.config.debug`                 | Enable debug logging                                                                          | `true`                                           |
+| `cluster.config.trace`                 | Enable trace logging                                                                          | `true`                                           |
+| `cluster.config.raftLogging`           | Enable Raft logging                                                                           | `true`                                           |
+| `cluster.metrics.enabled`              | Deploy a Prometheus metrics exporter sidecar                                                  | `true`                                           |
+| `cluster.metrics.image`                | Metrics exporter image name                                                                   | `synadia/prometheus-nats-exporter`               |
+| `cluster.metrics.version`              | Metrics exporter image tag                                                                    | `0.2.0`                                          |
+| `image.registry`                       | NATS Streaming Operator image registry                                                        | `docker.io`                                      |
+| `image.repository`                     | NATS Streaming Operator image name                                                            | `synadia/nats-streaming-operator`                |
+| `image.tag`                            | NATS Streaming Operator image tag                                                             | `v0.2.2-v1alpha1`                                |
+| `image.pullPolicy`                     | Image pull policy                                                                             | `Always`                                         |
+| `image.pullSecrets`                    | Specify image pull secrets                                                                    | `nil`                                            |
+| `securityContext.enabled`              | Enable security context                                                                       | `true`                                           |
+| `securityContext.fsGroup`              | Group ID for the container                                                                    | `1001`                                           |
+| `securityContext.runAsUser`            | User ID for the container                                                                     | `1001`                                           |
+| `nodeSelector`                         | Node labels for pod assignment                                                                | `nil`                                            |
+| `tolerations`                          | Toleration labels for pod assignment                                                          | `nil`                                            |
+| `schedulerName`                        | Name of an alternate scheduler                                                                | `nil`                                            |
+| `antiAffinity`                         | Anti-affinity for pod assignment (values: soft or hard)                                       | `soft`                                           |
+| `podAnnotations`                       | Annotations to be added to pods                                                               | `{}`                                             |
+| `podLabels`                            | Additional labels to be added to pods                                                         | `{}`                                             |
+| `updateStrategy`                       | Deployment update strategy (RollingUpdate or Recreate)                                        | `RollingUpdate`                                  |
+| `resources`                            | CPU/Memory resource requests/limits                                                           | `{}`                                             |
+| `livenessProbe.enabled`                | Enable liveness probe                                                                         | `true`                                           |
+| `livenessProbe.initialDelaySeconds`    | Delay before liveness probe is initiated                                                      | `30`                                             |
+| `livenessProbe.periodSeconds`          | How often to perform the probe                                                                | `10`                                             |
+| `livenessProbe.timeoutSeconds`         | When the probe times out                                                                      | `5`                                              |
+| `livenessProbe.failureThreshold`       | Minimum consecutive failures for the probe to be considered failed after having succeeded     | `6`                                              |
+| `livenessProbe.successThreshold`       | Minimum consecutive successes for the probe to be considered successful after having failed   | `1`                                              |
+| `readinessProbe.enabled`               | Enable readiness probe                                                                        | `true`                                           |
+| `readinessProbe.initialDelaySeconds`   | Delay before readiness probe is initiated                                                     | `5`                                              |
+| `readinessProbe.periodSeconds`         | How often to perform the probe                                                                | `10`                                             |
+| `readinessProbe.timeoutSeconds`        | When the probe times out                                                                      | `5`                                              |
+| `readinessProbe.failureThreshold`      | Minimum consecutive failures for the probe to be considered failed after having succeeded     | `6`                                              |
+| `readinessProbe.successThreshold`      | Minimum consecutive successes for the probe to be considered successful after having failed   | `1`                                              |
+| `clusterScoped`                        | Enable cluster-scoped installation (read the warnings carefully)                              | `false`                                          |
+
+### Example
+
+Here is an example of how to point the NATS Streaming cluster at an existing
+NATS service.
+
+Specify each parameter using the `--set key=value[,key=value]` argument to
+`helm install`:
+
+```bash
+$ helm install \
+  --name my-release \
+  --set cluster.natsSvc=my-nats-cluster \
+  .
+```
+
+Alternatively, consider editing the default `values.yaml`, which is easier to
+maintain:
+
+```yaml
+...
+cluster:
+  size: 2
+  natsSvc: my-nats-cluster
+...
+```
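+
+If you only want the operator and plan to create `NatsStreamingCluster`
+resources yourself, the bundled cluster can be switched off; this uses only
+the `cluster.enabled` parameter documented above:
+
+```bash
+$ helm install --name my-release --set cluster.enabled=false .
+```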
+
+Alternatively, a YAML file that specifies the values for the parameters can be
+provided while installing the chart. For example:
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+```bash
+$ helm install --name my-release -f values.yaml .
+```
diff --git a/helm/nats-streaming-operator/templates/NOTES.txt b/helm/nats-streaming-operator/templates/NOTES.txt
new file mode 100644
index 0000000..0a53d8d
--- /dev/null
+++ b/helm/nats-streaming-operator/templates/NOTES.txt
@@ -0,0 +1,25 @@
+** Please be patient while the chart is being deployed **
+{{- if .Values.clusterScoped }}
+
+** WARNING! **: You have installed a cluster-scoped NATS Streaming Operator. Make sure there are no other NATS Streaming Operator deployments in this Kubernetes cluster.
+{{- if not (eq .Release.Namespace "nats-io") }}
+
+** WARNING! **: The release namespace must be "nats-io", but you used "{{ .Release.Namespace }}"!
+{{- end }}
+{{- end }}
+
+NATS can be accessed via port 4222 on the following DNS name from within your cluster:
+
+    nats-cluster.{{ .Release.Namespace }}.svc.cluster.local
+
+The NATS monitoring service can be accessed via port 8222 on the following DNS name from within your cluster:
+
+    nats-cluster-mgmt.{{ .Release.Namespace }}.svc.cluster.local
+
+To access the monitoring service from outside the cluster, follow the steps below:
+
+1. Forward the monitoring port to your local machine:
+
+    kubectl port-forward svc/nats-cluster-mgmt 8222
+
+2. Open a browser and access the NATS monitoring dashboard at http://localhost:8222
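For reference, once the monitoring port from the NOTES above has been
forwarded, the stock NATS monitoring endpoints are reachable locally. A quick
smoke test (a sketch; `/varz` is a standard NATS server monitoring route, not
something this chart defines):

```bash
# Forward the monitoring port in the background, then query server stats
$ kubectl port-forward svc/nats-cluster-mgmt 8222 &
$ curl http://localhost:8222/varz
```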
diff --git a/helm/nats-streaming-operator/templates/_helpers.tpl b/helm/nats-streaming-operator/templates/_helpers.tpl
new file mode 100644
index 0000000..493469a
--- /dev/null
+++ b/helm/nats-streaming-operator/templates/_helpers.tpl
@@ -0,0 +1,21 @@
+{{/* vim: set filetype=mustache: */}}
+
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "nats.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "nats.fullname" -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create the chart name and version as used by the chart label.
+*/}}
+{{- define "nats.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/helm/nats-streaming-operator/templates/crd.yaml b/helm/nats-streaming-operator/templates/crd.yaml
new file mode 100644
index 0000000..477ff20
--- /dev/null
+++ b/helm/nats-streaming-operator/templates/crd.yaml
@@ -0,0 +1,18 @@
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  name: natsstreamingclusters.streaming.nats.io
+  annotations:
+    "helm.sh/hook": "crd-install"
+    "helm.sh/hook-delete-policy": "before-hook-creation"
+spec:
+  group: streaming.nats.io
+  names:
+    kind: NatsStreamingCluster
+    listKind: NatsStreamingClusterList
+    plural: natsstreamingclusters
+    singular: natsstreamingcluster
+    shortNames: ["stanclusters", "stancluster"]
+  scope: Namespaced
+  version: v1alpha1
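The CRD above registers the `NatsStreamingCluster` kind (short names
`stancluster`, `stanclusters`). For orientation, a minimal hand-written
resource using only fields that the chart's own `natsstreaming.yaml` template
sets might look like this (the name is illustrative):

```yaml
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
  name: example-stan
spec:
  # Number of streaming server nodes
  size: 3
  # Existing NATS cluster service to connect to
  natsSvc: nats-cluster
```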
diff --git a/helm/nats-streaming-operator/templates/deployment.yaml b/helm/nats-streaming-operator/templates/deployment.yaml
new file mode 100644
index 0000000..634a065
--- /dev/null
+++ b/helm/nats-streaming-operator/templates/deployment.yaml
@@ -0,0 +1,125 @@
+---
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+  name: {{ template "nats.fullname" . }}
+  labels:
+    app: {{ template "nats.name" . }}
+    chart: {{ template "nats.chart" . }}
+    release: {{ .Release.Name }}
+    heritage: {{ .Release.Service }}
+{{- with .Values.annotations }}
+  annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+  replicas: {{ .Values.replicas }}
+  selector:
+    matchLabels:
+      app: {{ template "nats.name" . }}
+      release: {{ .Release.Name }}
+  strategy:
+    type: {{ .Values.updateStrategy }}
+    {{- if ne .Values.updateStrategy "RollingUpdate" }}
+    rollingUpdate: null
+    {{- end }}
+  template:
+    metadata:
+      labels:
+        app: {{ template "nats.name" . }}
+        release: {{ .Release.Name }}
+{{- with .Values.podLabels }}
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- with .Values.podAnnotations }}
+      annotations:
+{{ toYaml . | indent 8 }}
+{{- end }}
+    spec:
+      {{- if .Values.rbacEnabled }}
+      serviceAccountName: nats-streaming-operator
+      {{- end }}
+      containers:
+      - name: nats-streaming-operator
+        image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
+        imagePullPolicy: {{ .Values.image.pullPolicy }}
+        env:
+        - name: MY_POD_NAMESPACE
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.namespace
+        - name: MY_POD_NAME
+          valueFrom:
+            fieldRef:
+              fieldPath: metadata.name
+        ports:
+        - name: readyz
+          containerPort: 8080
+        {{- if .Values.livenessProbe.enabled }}
+        livenessProbe:
+          httpGet:
+            path: /readyz
+            port: readyz
+          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
+          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
+          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
+          successThreshold: {{ .Values.livenessProbe.successThreshold }}
+          failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
+        {{- end }}
+        {{- if .Values.readinessProbe.enabled }}
+        readinessProbe:
+          httpGet:
+            path: /readyz
+            port: readyz
+          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
+          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
+          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
+          successThreshold: {{ .Values.readinessProbe.successThreshold }}
+          failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
+        {{- end }}
+        resources:
+{{ toYaml .Values.resources | indent 10 }}
+      {{- if .Values.securityContext.enabled }}
+      securityContext:
+        fsGroup: {{ .Values.securityContext.fsGroup }}
+        runAsUser: {{ .Values.securityContext.runAsUser }}
+      {{- end }}
+      {{- with .Values.nodeSelector }}
+      nodeSelector:
+{{ toYaml . | indent 8 }}
+      {{- end }}
+      {{- with .Values.tolerations }}
+      tolerations:
+{{ toYaml . | indent 8 }}
+      {{- end }}
+      {{- if .Values.schedulerName }}
+      schedulerName: "{{ .Values.schedulerName }}"
+      {{- end }}
+      {{- if eq .Values.antiAffinity "hard" }}
+      affinity:
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - topologyKey: "kubernetes.io/hostname"
+            labelSelector:
+              matchLabels:
+                app: "{{ template "nats.name" . }}"
+                release: {{ .Release.Name | quote }}
+      {{- else if eq .Values.antiAffinity "soft" }}
+      affinity:
+        podAntiAffinity:
+          preferredDuringSchedulingIgnoredDuringExecution:
+          - weight: 1
+            podAffinityTerm:
+              topologyKey: kubernetes.io/hostname
+              labelSelector:
+                matchLabels:
+                  app: "{{ template "nats.name" . }}"
+                  release: "{{ .Release.Name }}"
+      {{- end }}
+      {{- with .Values.image.pullSecrets }}
+      imagePullSecrets:
+{{ toYaml . | indent 8 }}
+      {{- end }}
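The deployment template switches between soft and hard pod anti-affinity based
on the `antiAffinity` value. For example, to require operator replicas to land
on distinct nodes (only meaningful with `replicas` above 1):

```bash
$ helm install --name my-release --set replicas=2,antiAffinity=hard .
```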
}}" + release: "{{ .Release.Name }}" + {{- end }} + {{- if .Values.pullSecrets }} + imagePullSecrets: + {{ .Values.pullSecrets}} + {{- end }} diff --git a/helm/nats-streaming-operator/templates/natsstreaming.yaml b/helm/nats-streaming-operator/templates/natsstreaming.yaml new file mode 100644 index 0000000..07da1c8 --- /dev/null +++ b/helm/nats-streaming-operator/templates/natsstreaming.yaml @@ -0,0 +1,36 @@ +{{- if .Values.cluster.enabled }} +apiVersion: streaming.nats.io/v1alpha1 +kind: NatsStreamingCluster +metadata: + name: {{ .Values.cluster.name }} +spec: + size: {{ .Values.cluster.size }} + image: nats-streaming:{{ .Values.cluster.version }} + natsSvc: {{ .Values.cluster.natsSvc }} + + config: + debug: {{ .Values.cluster.config.debug }} + trace: {{ .Values.cluster.config.trace }} + raftLogging: {{ .Values.cluster.config.raftLogging }} + + template: + {{- if .Values.cluster.metrics.enabled }} + metadata: + annotations: + prometheus.io/scrape: "true" + prometheus.io/port: "metrics" + {{- end }} + spec: + containers: + - name: {{ .Values.cluster.name }} + + {{- if .Values.cluster.metrics.enabled }} + - name: metrics + image: {{ .Values.cluster.metrics.image }}:{{ .Values.cluster.metrics.version }} + args: ["-varz", "-channelz", "-serverz", "-DV", "http://localhost:8222"] + ports: + - name: metrics + containerPort: 7777 + protocol: TCP + {{- end }} +{{- end }} diff --git a/helm/nats-streaming-operator/templates/rbac.yaml b/helm/nats-streaming-operator/templates/rbac.yaml new file mode 100644 index 0000000..86ff97e --- /dev/null +++ b/helm/nats-streaming-operator/templates/rbac.yaml @@ -0,0 +1,58 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: nats-streaming-operator +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: nats-streaming-operator-binding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: nats-streaming-operator +subjects: +- kind: ServiceAccount + name: nats-streaming-operator + namespace: default +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: nats-streaming-operator +rules: +# Allow creating CRDs +- apiGroups: + - apiextensions.k8s.io + resources: + - customresourcedefinitions + verbs: ["*"] + +# Allow all actions on NatsClusters +- apiGroups: + - nats.io + resources: + - natsclusters + - natsserviceroles + verbs: ["*"] + +# Allow all actions on NatsStreamingClusters +- apiGroups: + - streaming.nats.io + resources: + - natsstreamingclusters + verbs: ["*"] + +# Allow actions on basic Kubernetes objects +- apiGroups: [""] + resources: + - configmaps + - secrets + - pods + - services + - serviceaccounts + - serviceaccounts/token + - endpoints + - events + verbs: ["*"] diff --git a/helm/nats-streaming-operator/values.yaml b/helm/nats-streaming-operator/values.yaml new file mode 100644 index 0000000..045c5aa --- /dev/null +++ b/helm/nats-streaming-operator/values.yaml @@ -0,0 +1,126 @@ +## Specify if RBAC authorization is enabled. 
diff --git a/helm/nats-streaming-operator/values.yaml b/helm/nats-streaming-operator/values.yaml
new file mode 100644
index 0000000..045c5aa
--- /dev/null
+++ b/helm/nats-streaming-operator/values.yaml
@@ -0,0 +1,126 @@
+## Specify if RBAC authorization is enabled.
+## ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
+##
+rbacEnabled: true
+
+## Number of NATS Streaming Operator replicas
+replicas: 1
+
+cluster:
+  # Deploy a NATS Streaming cluster with the operator
+  enabled: true
+
+  # Name of the NATS Streaming cluster
+  name: nats-streaming-cluster
+
+  # Version of NATS Streaming to deploy
+  version: 0.12.2
+
+  # Number of nodes in the cluster
+  size: 3
+
+  # NATS cluster service name
+  natsSvc: "nats-cluster"
+
+  config:
+    debug: true
+    trace: true
+    raftLogging: true
+
+  metrics:
+    enabled: true
+    image: synadia/prometheus-nats-exporter
+    version: "0.2.0"
+
+image:
+  registry: docker.io
+  repository: synadia/nats-streaming-operator
+  tag: v0.2.2-v1alpha1
+
+  ## Specify an imagePullPolicy
+  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
+  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+  ##
+  pullPolicy: Always
+
+  ## Optionally specify an array of imagePullSecrets.
+  ## Secrets must be manually created in the namespace.
+  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+  ##
+  # pullSecrets:
+  #   - myRegistryKeySecretName
+
+## NATS Streaming Operator pod security context
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+##
+securityContext:
+  enabled: true
+  fsGroup: 1001
+  runAsUser: 1001
+
+## Node selector and tolerations for pod assignment
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations
+##
+# nodeSelector: {}
+# tolerations: []
+
+## Use an alternate scheduler, e.g. "stork".
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
+##
+# schedulerName:
+
+## Pod anti-affinity
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+## Possible values: soft, hard
+antiAffinity: soft
+
+## Deployment annotations
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+##
+# annotations: {}
+
+## Pod annotations
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+##
+podAnnotations: {}
+
+## Additional pod labels
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+##
+podLabels: {}
+
+## Update strategy for the operator Deployment: RollingUpdate or Recreate.
+## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
+updateStrategy: RollingUpdate
+
+## Configure resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+##
+resources: {}
+# limits:
+#   cpu: 500m
+#   memory: 512Mi
+# requests:
+#   cpu: 100m
+#   memory: 256Mi
+
+## Configure extra options for liveness and readiness probes
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes
+livenessProbe:
+  enabled: true
+  initialDelaySeconds: 30
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+readinessProbe:
+  enabled: true
+  initialDelaySeconds: 5
+  periodSeconds: 10
+  timeoutSeconds: 5
+  failureThreshold: 6
+  successThreshold: 1
+
+## Operator scope
+## NOTE: If true
+## * Make sure that no other NATS Streaming Operator is running in the cluster
+## * The release namespace must be "nats-io"
+clusterScoped: false
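As a closing example tying the values above together, a small override file
that scales the streaming cluster, disables the metrics sidecar, and bounds
operator resources; only keys defined in this `values.yaml` are used, and the
file name is arbitrary:

```yaml
# my-values.yaml; install with: helm install --name my-release -f my-values.yaml .
cluster:
  size: 5
  metrics:
    enabled: false
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
```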