The operator creates, updates, and deletes UptimeRobot monitors for a particular Ingress resource. It is designed to use the `friendly_name`
parameter of a monitor and/or alert contact for unique identification.
The operator uses uptimerobot-tooling to handle API requests.
The operator deletes the monitors it creates when the Ingress resource is deleted.
Supported Environment Variables:
In addition to the environment variables supported by the tooling mentioned above, the operator has the following configuration.
Variable | Description | Default
---|---|---
DOMAIN_PREFIX | The domain prefix to use when specifying the annotations. | onaio.github.io
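For example, to use your own prefix instead of the default, set `DOMAIN_PREFIX` in the manager container's environment (a sketch; `example.com` is a placeholder value). Annotations would then start with `example.com/uptimerobot-monitor`:

```yaml
env:
  - name: DOMAIN_PREFIX
    value: "example.com"
```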
With the `DOMAIN_PREFIX` set to `onaio.github.io`, the configurations are supplied as follows:
Example 1

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-minimal-ingress
  annotations:
    onaio.github.io/uptimerobot-monitor: "true"
    onaio.github.io/uptimerobot-monitor-type: "HTTP"
    onaio.github.io/uptimerobot-monitor-friendly_name: "tester"
    onaio.github.io/uptimerobot-monitor-alert_contacts: "tester opsgenie"
    onaio.github.io/uptimerobot-monitor-interval: "60"
    onaio.github.io/uptimerobot-monitor-timeout: "30"
    # To use more parameters, append a valid UptimeRobot monitor API parameter from https://uptimerobot.com/api/ to the prefix `onaio.github.io/uptimerobot-monitor-`
spec:
  rules:
    - host: test-domain.localhost
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```
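The same convention extends to other monitor parameters. For example, for a site behind HTTP basic auth, and assuming UptimeRobot's `http_username` and `http_password` monitor parameters apply to your monitor type, the annotations could additionally include:

```yaml
annotations:
  onaio.github.io/uptimerobot-monitor: "true"
  # The appended parameter names come straight from the UptimeRobot API.
  onaio.github.io/uptimerobot-monitor-http_username: "admin"
  onaio.github.io/uptimerobot-monitor-http_password: "s3cret"
```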
The operator reads the monitor's configuration from the annotations on the Ingress resource.
The first annotation entry, `onaio.github.io/uptimerobot-monitor`, enables the Ingress resource to be evaluated by the operator. The other annotations supply the parameters of the monitor. The naming convention is `onaio.github.io/uptimerobot-monitor-<parameter>`.
For more parameters, refer to the tooling documentation and the UptimeRobot API documentation.
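The naming convention can be illustrated with a short Go sketch (a hypothetical helper, not the operator's actual code): everything after the `onaio.github.io/uptimerobot-monitor-` prefix is treated as an UptimeRobot API parameter name, while the bare enable-flag annotation is skipped.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// monitorParams collects UptimeRobot API parameters from Ingress
// annotations by stripping the monitor prefix. The bare
// "<prefix>/uptimerobot-monitor" key is the enable flag, not a parameter.
func monitorParams(annotations map[string]string, domainPrefix string) map[string]string {
	prefix := domainPrefix + "/uptimerobot-monitor-"
	params := map[string]string{}
	for key, value := range annotations {
		if strings.HasPrefix(key, prefix) {
			params[strings.TrimPrefix(key, prefix)] = value
		}
	}
	return params
}

func main() {
	annotations := map[string]string{
		"onaio.github.io/uptimerobot-monitor":               "true",
		"onaio.github.io/uptimerobot-monitor-friendly_name": "tester",
		"onaio.github.io/uptimerobot-monitor-interval":      "60",
	}
	params := monitorParams(annotations, "onaio.github.io")

	// Print parameters in a stable order.
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%s\n", k, params[k])
	}
	// Output:
	// friendly_name=tester
	// interval=60
}
```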
You’ll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for
testing, or run against a remote cluster.
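If you go the KIND route, a throwaway cluster can be created as follows (this assumes the `kind` CLI is installed; the cluster name is arbitrary):

```shell
# Create a local Kubernetes cluster for testing the operator
kind create cluster --name uptimerobot-test

# kind sets the current kubeconfig context automatically
kubectl cluster-info --context kind-uptimerobot-test
```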
Note: Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
Ensure you have supplied the environment variables in `config/manager/manager.yaml`.
To deploy the operator you will need the following manifests:
- serviceaccount
- clusterrole
- clusterrolebinding
- deployment
Create a YAML file, paste the snippet below, and update the configuration to your preferences.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: serviceaccount
    app.kubernetes.io/instance: controller-manager
    app.kubernetes.io/component: rbac
    app.kubernetes.io/created-by: uptimerobot-operator
    app.kubernetes.io/part-of: uptimerobot-operator
  name: uptimerobot-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: uptimerobot-operator
rules:
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: clusterrolebinding
    app.kubernetes.io/instance: manager-rolebinding
    app.kubernetes.io/component: rbac
    app.kubernetes.io/created-by: uptimerobot-operator
    app.kubernetes.io/part-of: uptimerobot-operator
  name: uptimerobot-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: uptimerobot-operator
subjects:
  - kind: ServiceAccount
    name: uptimerobot-operator
    namespace: default # update this to your preferred namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptimerobot-operator
  labels:
    control-plane: controller-manager
    app.kubernetes.io/name: deployment
    app.kubernetes.io/instance: controller-manager
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: uptimerobot-operator
    app.kubernetes.io/part-of: uptimerobot-operator
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - command:
            - /manager
          env:
            - name: UPTIME_ROBOT_API_KEY
              value: "<api-key>"
            # - name: MONITOR_RESOLVE_ALERT_CONTACTS_BY_FRIENDLY_NAME
            #   value: "true"
            # - name: MONITOR_ALERT_CONTACTS_DELIMITER
            #   value: "-"
            # - name: MONITOR_ALERT_CONTACTS_ATTRIB_DELIMITER
            #   value: "_"
          image: onaio/uptimerobot-operator:v0.1.0
          name: manager
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - "ALL"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
            initialDelaySeconds: 15
            periodSeconds: 20
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8081
            initialDelaySeconds: 5
            periodSeconds: 10
          # TODO(user): Configure the resources accordingly based on the project requirements.
          # More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          resources:
            limits:
              cpu: 500m
              memory: 128Mi
            requests:
              cpu: 10m
              memory: 64Mi
      serviceAccountName: uptimerobot-operator
      terminationGracePeriodSeconds: 10
```
```sh
kubectl apply -f uptimerobot-operator.yaml
```
Use the latest tag from Docker Hub.
- Build and push your image to the location specified by `IMG`:

```sh
make docker-build docker-push IMG=onaio/uptimerobot-operator:v0.1.0
```

- Deploy the controller to the cluster with the image specified by `IMG`:

```sh
make deploy IMG=onaio/uptimerobot-operator:v0.1.0
```
To delete the CRDs from the cluster:

```sh
make uninstall
```

Undeploy the controller from the cluster:

```sh
make undeploy
```
// TODO(user): Add detailed information on how you would like others to contribute to this project
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
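The reconcile idea can be sketched in plain Go (a toy illustration of the pattern only, not the operator's actual controller-runtime code): compare the desired state against what is observed and create, update, or delete as needed.

```go
package main

import "fmt"

// Monitor is the desired/observed state for a single monitor,
// reduced to a toy form for illustration.
type Monitor struct {
	FriendlyName string
	Interval     int
}

// reconcile drives the observed state toward the desired state and
// reports what action was taken. A nil desired state models a deleted
// Ingress resource. Hypothetical sketch only.
func reconcile(desired *Monitor, observed map[string]Monitor) string {
	if desired == nil {
		// Ingress deleted: remove the monitors the operator created.
		for name := range observed {
			delete(observed, name)
		}
		return "deleted"
	}
	current, ok := observed[desired.FriendlyName]
	switch {
	case !ok:
		observed[desired.FriendlyName] = *desired
		return "created"
	case current != *desired:
		observed[desired.FriendlyName] = *desired
		return "updated"
	default:
		return "unchanged"
	}
}

func main() {
	state := map[string]Monitor{}
	desired := &Monitor{FriendlyName: "tester", Interval: 60}
	fmt.Println(reconcile(desired, state)) // created
	fmt.Println(reconcile(desired, state)) // unchanged
	desired.Interval = 30
	fmt.Println(reconcile(desired, state)) // updated
	fmt.Println(reconcile(nil, state))     // deleted
}
```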
- Install the CRDs into the cluster:

```sh
make install
```

- Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):

```sh
make run
```

NOTE: You can also do this in one step by running `make install run`.
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

```sh
make manifests
```
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation.
Copyright 2022.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.