This Helm chart is designed to support FeatureHub’s most scalable deployment option (REST + streaming real-time feature updates). Read more about it here.
These instructions are designed to get you up and running on a local Kubernetes cluster running on KinD. They will be fleshed out over time as we get these examples running on Google’s GKE/Autopilot, AWS’s EKS/Fargate, and Azure’s AKS.
Note: This set of Helm charts is designed to run on a local KinD cluster and be fully operable by anyone following these instructions. Please note that you will likely need to modify certain values in the chart to suit your production needs.
The Helm chart installs on the KinD cluster with Postgres and NATS by default. There are deploy scripts for using both, but a simple
$ helm upgrade -i featurehub featurehub -n featurehub
will work as long as the other KinD dependencies are installed. Below are the setup instructions for ensuring that KinD is installed and configured correctly (it needs an ingress running on port 80) before you install the charts.
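If the featurehub namespace doesn’t exist yet, Helm’s standard --create-namespace flag will create it as part of the install:

$ helm upgrade -i featurehub featurehub -n featurehub --create-namespace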
We assume KinD here rather than Docker Desktop’s Kubernetes, Minikube, or k3s. If you are familiar with those, you should be able to use them too, and you can skip this step.
If you don’t have KinD installed, follow the KinD installation instructions.
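The ingress-nginx manifest applied below expects the KinD node to be labelled ingress-ready and ports 80/443 to be mapped to the host. A minimal cluster config along the lines of KinD’s own ingress guide (the file name kind-config.yaml is just an example):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80    # the port-80 ingress the chart needs
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

$ kind create cluster --config kind-config.yaml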
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=90s
helm upgrade --install featurehub featurehub
(If you receive an error 'Error: Internal error occurred: failed calling webhook …', try deleting the ingress-nginx-admission resource with

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

and try again.)
The FeatureHub services will initially 'crash' because they expect the Postgres and NATS services to already be available. It’s OK; they will recover fairly quickly and start up.
To update it with config changes, just run:
helm upgrade featurehub featurehub
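If your config changes live in an override file rather than in --set flags, pass it with Helm’s standard -f option (my-values.yaml is a hypothetical file name):

$ helm upgrade featurehub featurehub -f my-values.yaml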
It should show something like this:
$ kubectl get pods
NAME                                                READY   STATUS              RESTARTS   AGE
featurehub-dacha-7d65db8c48-r59l6                   0/1     Pending             0          7s
featurehub-dacha-7d65db8c48-zqb8s                   0/1     Running             0          8s
featurehub-edge-86b6b5cf46-4cqvr                    0/1     ContainerCreating   0          7s
featurehub-edge-86b6b5cf46-n8cm5                    0/1     ContainerCreating   0          8s
featurehub-management-repository-567b8fbffb-7vcbf   0/1     ContainerCreating   0          8s
featurehub-nats-0                                   0/3     ContainerCreating   0          8s
featurehub-nats-1                                   2/3     Running             0          7s
featurehub-nats-2                                   0/3     Pending             0          7s
featurehub-nats-box-78587c4478-nkbzt                0/1     ContainerCreating   0          8s
featurehub-postgresql-0                             0/1     Running             0          8s
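Rather than polling kubectl get pods until everything recovers, you can block until all pods report ready (the 300s timeout is an arbitrary choice):

$ kubectl wait --namespace featurehub --for=condition=ready pod --all --timeout=300s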
By default, the Helm chart will create a global ingress, as defined by the values, which means all traffic routed into your namespace will go through that ingress and be routed as per its rules. This means default traffic goes to MR and /features traffic goes to Edge; Dacha never receives edge traffic.
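As a sketch, the rendered ingress routing looks something like the following. The service names match the pod listing above, but the Edge port shown is an assumption, so check the chart’s templates and values for the real rules:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: featurehub
spec:
  rules:
  - http:
      paths:
      - path: /features          # SDK feature traffic goes to Edge
        pathType: Prefix
        backend:
          service:
            name: featurehub-edge
            port:
              number: 8064       # assumed Edge server port
      - path: /                  # everything else goes to MR
        pathType: Prefix
        backend:
          service:
            name: featurehub-management-repository
            port:
              number: 8085       # MR server port (see below)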
All services generally have a separate monitoring port and server port, as per the docs: MR uses 8701 for monitoring and 8085 for the server. This ensures your liveness and Prometheus traffic endpoints aren’t exposed to the world.
If your services aren’t coming up and you have modified the chart, it’s likely because the port configuration is wrong.
You will need to find a pod (unless you have enabled Prometheus on the MR, in which case you get a service endpoint for it). For example:
$ kubectl get pods
and then port-forward. Here the management port is forwarded directly from the pod:
$ kubectl port-forward featurehub-management-repository-bf8b79d4d-2pkdl 8701
A curl to check its health:

$ curl localhost:8701
featurehub is here
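The monitoring port also serves the Prometheus scrape endpoint; assuming the conventional /metrics path (verify against the MR docs):

$ curl localhost:8701/metrics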
If there is something wrong with the health check of a service, the logs should print exactly what is wrong, and so should the endpoint.
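To see those logs, tail the pod directly (reusing the example pod name from above):

$ kubectl logs -f featurehub-management-repository-bf8b79d4d-2pkdl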
Still to do:

- support Google Pub/Sub configuration (including local dev)
- clearly delineate application params vs env-var-based deployment
- add better documentation and links
- annotate all fields so helm-docs picks up the docs
- add a changelog
The backend service port for MR was not open, so Dacha2 was failing to connect on a restart of the service.
FeatureHub v1.6.0 ships with Dacha2, the new caching layer. Dacha2 will start up faster, but it will initially have no data in it, so it will be slower to respond on the first request. The addition is that it needs to know where MR is, because when it comes across an environment or service key it doesn’t understand, it will ask MR (failures get cached so MR doesn’t get spammed with invalid requests). Dacha1 will be supported over the next few major versions but is slowly being phased out.
The changes to the chart include turning off Dacha1 support, turning on Dacha2, and telling Dacha2 where MR is.
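In values terms, that change looks roughly like this; the key names are illustrative guesses, so check the chart’s values.yaml for the real ones:

dacha1:
  enabled: false   # Dacha1 is being phased out
dacha2:
  enabled: true
  # Dacha2 asks MR about environment/service keys it doesn't recognise,
  # so it needs MR's in-cluster address (service name from the pod listing above)
  managementRepositoryUrl: http://featurehub-management-repository:8085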
We have also upgraded the embedded NATS chart to NATS 2.9.1 and the Postgres chart to Postgres 15. The new schema permissions model required a minor tweak to the startup script here.
Please ensure that you install and run helm-docs before issuing a PR, because the workflow confirms it is up to date. This is required for publishing to the Helm Artifact Registry.
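Assuming helm-docs is installed, regenerating the chart documentation is a single command run from the repository root:

$ helm-docs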
FeatureHub operates under the Apache 2.0 license. Please refer to the full license here.