Add NATS Streaming Operator Helm Chart #39

Merged: 1 commit merged on Jan 21, 2020
18 changes: 18 additions & 0 deletions helm/nats-streaming-operator/Chart.yaml
@@ -0,0 +1,18 @@
name: nats-streaming-operator
version: 1.0.0
appVersion: 0.2.3
description: Operator for managing NATS Streaming clusters running on Kubernetes.
keywords:
- nats
- streaming
- delivery
- ratelimit
- replay
- restart
home: https://github.com/nats-io/nats-streaming-operator
sources:
- https://github.com/nats-io/nats-streaming-operator
maintainers:
  - name: starefossen
    email: hans@starefossen.com
icon: https://bitnami.com/assets/stacks/nats/img/nats-stack-110x117.png
135 changes: 135 additions & 0 deletions helm/nats-streaming-operator/README.md
@@ -0,0 +1,135 @@
# NATS Streaming Operator Helm Chart

NATS Streaming is an extremely performant, lightweight, reliable streaming
platform built on NATS.

## TL;DR

```bash
$ helm install .
```

## Introduction

The NATS Streaming Operator manages [NATS
Streaming](https://github.com/nats-io/nats-streaming-server) clusters atop
[Kubernetes](http://kubernetes.io), automating their creation and
administration. With the operator you benefit from the flexibility of the
Kubernetes operator pattern: less juggling between manifests, plus a few handy
features such as automatic configuration reload.

If you want to manage NATS Streaming entirely by yourself and have more
control over your cluster, you can always use the classic [NATS
Streaming](https://github.com/nats-io/nats-streaming-operator/releases)
deployment.

## Prerequisites

- Kubernetes 1.8+

## Installing the Chart

To install the chart with the release name `my-release`:

```bash
$ helm install --name my-release .
```

The command deploys the NATS Streaming Operator on the Kubernetes cluster with
the default configuration. The [configuration](#configuration) section lists
the parameters that can be configured during installation.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `my-release` deployment:

```bash
$ helm delete my-release
```

The command removes all the Kubernetes components associated with the chart and
deletes the release.

## Configuration

The following table lists the configurable parameters of the NATS Streaming
Operator chart and their default values.

| Parameter | Description | Default |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | ----------------------------------------------- |
| `rbacEnabled` | Switch to enable/disable RBAC for this chart | `true` |
| `cluster.enabled`                     | Create a `NatsStreamingCluster` resource alongside the operator                               |                                                  |
| `cluster.name`                        | Name of the managed NATS Streaming cluster                                                    |                                                  |
| `cluster.version`                     | `nats-streaming` image tag used for the cluster pods                                          |                                                  |
| `cluster.size`                        | Number of nodes in the NATS Streaming cluster                                                 |                                                  |
| `cluster.natsSvc`                     | Name of the NATS service the streaming cluster connects to                                    |                                                  |
| `cluster.config.debug`                | Enable debug logging for the cluster                                                          |                                                  |
| `cluster.config.trace`                | Enable trace logging for the cluster                                                          |                                                  |
| `cluster.config.raftLogging`          | Enable Raft logging for the cluster                                                           |                                                  |
| `cluster.metrics.enabled`             | Add a Prometheus metrics exporter sidecar to the cluster pods                                 |                                                  |
| `cluster.metrics.image`               | Metrics exporter image                                                                        |                                                  |
| `cluster.metrics.version`             | Metrics exporter image tag                                                                    |                                                  |
| `image.registry` | NATS Operator image registry | `docker.io` |
| `image.repository` | NATS Operator image name | `connecteverything/nats-operator` |
| `image.tag` | NATS Operator image tag | `0.4.3-v1alpha2` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `nodeSelector` | Node labels for pod assignment | `nil` |
| `tolerations` | Toleration labels for pod assignment | `nil` |
| `schedulerName` | Name of an alternate scheduler | `nil` |
| `antiAffinity` | Anti-affinity for pod assignment (values: soft or hard) | `soft` |
| `podAnnotations` | Annotations to be added to pods | `{}` |
| `podLabels` | Additional labels to be added to pods | `{}` |
| `updateStrategy` | Replicaset Update strategy | `OnDelete` |
| `rollingUpdatePartition` | Partition for Rolling Update strategy | `nil` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `livenessProbe.enabled` | Enable liveness probe | `true` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `readinessProbe.enabled` | Enable readiness probe | `true` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `5` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `10` |
| `readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `clusterScoped`                       | Enable cluster-scoped installation (read the warnings carefully)                              | `false`                                          |

### Example

Here is an example of how to point the managed NATS Streaming cluster at an existing NATS service.

Specify each parameter using the `--set key=value[,key=value]` argument to
`helm install`.

```bash
$ helm install \
  --name my-release \
  --set cluster.natsSvc=my-nats-cluster \
  .
```

You can also consider editing the default `values.yaml` directly, as it is easier to manage:

```yaml
...
cluster:
  size: 2
  natsSvc: my-nats-cluster
...
```

Alternatively, a YAML file that specifies the values for the parameters can be
provided while installing the chart. For example,

> **Tip**: You can use the default [values.yaml](values.yaml)

```bash
$ helm install --name my-release -f values.yaml .
```
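As a sketch of a fuller values file (parameter names follow the table above; the concrete values below are only illustrative), the following overrides would also create a managed `NatsStreamingCluster` with the Prometheus metrics exporter sidecar enabled:

```yaml
# my-values.yaml -- illustrative overrides only; adjust to your environment.
rbacEnabled: true

cluster:
  enabled: true
  name: example-stan
  version: 0.16.2            # nats-streaming image tag used by the operator
  size: 3
  natsSvc: example-nats      # existing NATS service the cluster connects to
  config:
    debug: false
    trace: false
    raftLogging: false
  metrics:
    enabled: true
    image: synadia/prometheus-nats-exporter   # placeholder exporter image
    version: 0.5.0                            # placeholder exporter tag
```

Install it the same way: `helm install --name my-release -f my-values.yaml .`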
25 changes: 25 additions & 0 deletions helm/nats-streaming-operator/templates/NOTES.txt
@@ -0,0 +1,25 @@
** Please be patient while the chart is being deployed **
{{- if .Values.clusterScoped }}

** WARNING ! **: You've installed a cluster-scoped NATS Streaming Operator. Make sure that there are no other deployments of the NATS Streaming Operator in the Kubernetes cluster.
{{- if not (eq .Release.Namespace "nats-io") }}

** WARNING ! **: The namespace must be "nats-io", but you used "{{ .Release.Namespace }}"!
{{- end }}
{{- end}}

NATS can be accessed via port 4222 on the following DNS name from within your cluster:

nats-cluster.{{ .Release.Namespace }}.svc.cluster.local

NATS monitoring service can be accessed via port 8222 on the following DNS name from within your cluster:

nats-cluster-mgmt.{{ .Release.Namespace }}.svc.cluster.local

To access the monitoring service from outside the cluster, follow the steps below:

1. Forward the monitoring port of the service to your local machine:

    kubectl port-forward svc/nats-cluster-mgmt 8222

2. Open a browser and go to http://localhost:8222 to view the NATS monitoring endpoints.
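For a quick check from the command line (a minimal sketch, assuming the port-forward above is running and the default NATS monitoring port 8222):

```bash
# Query the NATS monitoring endpoint through the local port-forward.
curl http://localhost:8222/varz
```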
21 changes: 21 additions & 0 deletions helm/nats-streaming-operator/templates/_helpers.tpl
@@ -0,0 +1,21 @@
{{/* vim: set filetype=mustache: */}}

{{/*
Expand the name of the chart.
*/}}
{{- define "nats.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "nats.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

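{{/*
Create a chart name and version string as used by the chart label.
*/}}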
{{- define "nats.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
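For reference, with the default chart name from `Chart.yaml` and a release named `my-release` (as in the README), these helpers render roughly as follows (a sketch; the actual values depend on `nameOverride` and the release name):

```yaml
# nats.name     -> nats-streaming-operator
# nats.fullname -> my-release-nats-streaming-operator
# nats.chart    -> nats-streaming-operator-1.0.0
```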
18 changes: 18 additions & 0 deletions helm/nats-streaming-operator/templates/crd.yaml
@@ -0,0 +1,18 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: natsstreamingclusters.streaming.nats.io
  annotations:
    "helm.sh/hook": "crd-install"
    "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
  group: streaming.nats.io
  names:
    kind: NatsStreamingCluster
    listKind: NatsStreamingClusterList
    plural: natsstreamingclusters
    singular: natsstreamingcluster
    shortNames: ["stanclusters", "stancluster"]
  scope: Namespaced
  version: v1alpha1
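Once this CRD is registered, the custom resources can be inspected with `kubectl` (assuming access to the cluster); the short names defined above work as aliases:

```bash
# List NatsStreamingCluster resources in the current namespace.
kubectl get natsstreamingclusters
kubectl get stanclusters
```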
125 changes: 125 additions & 0 deletions helm/nats-streaming-operator/templates/deployment.yaml
@@ -0,0 +1,125 @@
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "nats.fullname" . }}
  labels:
    app: {{ template "nats.name" . }}
    chart: {{ template "nats.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  {{- with .Values.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ template "nats.name" . }}
      release: {{ .Release.Name }}
  strategy:
    type: {{ .Values.updateStrategy }}
    {{- if ne .Values.updateStrategy "RollingUpdate" }}
    rollingUpdate: null
    {{- end }}
  template:
    metadata:
      labels:
        app: {{ template "nats.name" . }}
        release: {{ .Release.Name }}
      {{- with .Values.podAnnotations }}
      annotations:
{{ toYaml . | indent 8 }}
      {{- end }}
    spec:
      updateStrategy:
        type: {{ .Values.updateStrategy }}
        {{- if .Values.rollingUpdatePartition }}
        rollingUpdate:
          partition: {{ .Values.rollingUpdatePartition }}
        {{- end }}
      {{- if .Values.rbacEnabled }}
      serviceAccountName: nats-streaming-operator
      {{- end }}
      containers:
      - name: nats-streaming-operator
        image: {{ .Values.image.registry }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: readyz
          containerPort: 8080
        {{- if .Values.livenessProbe.enabled }}
        livenessProbe:
Review comment: nats-streaming-operator does not expose port 8080 in its pod, so I had a problem with this liveness probe.

          httpGet:
            path: /readyz
            port: readyz
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
          successThreshold: {{ .Values.livenessProbe.successThreshold }}
          failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
        {{- end }}
        {{- if .Values.readinessProbe.enabled }}
        readinessProbe:
          httpGet:
            path: /readyz
            port: readyz
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
          successThreshold: {{ .Values.readinessProbe.successThreshold }}
          failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
        {{- end }}
        resources:
{{ toYaml .Values.resources | indent 10 }}
      {{- if .Values.securityContext.enabled }}
      securityContext:
        fsGroup: {{ .Values.securityContext.fsGroup }}
        runAsUser: {{ .Values.securityContext.runAsUser }}
      {{- end }}
      {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
      {{- end }}
      {{- if .Values.tolerations }}
      tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
      {{- end }}
      {{- if .Values.schedulerName }}
      schedulerName: "{{ .Values.schedulerName }}"
      {{- end }}
      {{- if eq .Values.antiAffinity "hard" }}
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: "kubernetes.io/hostname"
            labelSelector:
              matchLabels:
                app: "{{ template "nats.name" . }}"
                release: {{ .Release.Name | quote }}
      {{- else if eq .Values.antiAffinity "soft" }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: "{{ template "nats.name" . }}"
                  release: "{{ .Release.Name }}"
      {{- end }}
      {{- if .Values.image.pullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.image.pullSecrets | indent 8 }}
      {{- end }}
36 changes: 36 additions & 0 deletions helm/nats-streaming-operator/templates/natsstreaming.yaml
@@ -0,0 +1,36 @@
{{- if .Values.cluster.enabled }}
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
  name: {{ .Values.cluster.name }}
spec:
  size: {{ .Values.cluster.size }}
  image: nats-streaming:{{ .Values.cluster.version }}
  natsSvc: {{ .Values.cluster.natsSvc }}

  config:
    debug: {{ .Values.cluster.config.debug }}
    trace: {{ .Values.cluster.config.trace }}
    raftLogging: {{ .Values.cluster.config.raftLogging }}

  template:
    {{- if .Values.cluster.metrics.enabled }}
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "metrics"
    {{- end }}
    spec:
      containers:
      - name: {{ .Values.cluster.name }}

      {{- if .Values.cluster.metrics.enabled }}
      - name: metrics
        image: {{ .Values.cluster.metrics.image }}:{{ .Values.cluster.metrics.version }}
        args: ["-varz", "-channelz", "-serverz", "-DV", "http://localhost:8222"]
        ports:
        - name: metrics
          containerPort: 7777
          protocol: TCP
      {{- end }}
{{- end }}
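For illustration, with hypothetical values such as `cluster.name=example-stan`, `cluster.size=3`, `cluster.version=0.16.2`, `cluster.natsSvc=example-nats` and metrics disabled, the template above would render roughly as follows (a sketch, not captured chart output):

```yaml
apiVersion: streaming.nats.io/v1alpha1
kind: NatsStreamingCluster
metadata:
  name: example-stan
spec:
  size: 3
  image: nats-streaming:0.16.2
  natsSvc: example-nats
  config:
    debug: false
    trace: false
    raftLogging: false
  template:
    spec:
      containers:
      - name: example-stan
```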