An operator to watch ingresses/routes and create liveness alerts for your apps/microservices in Uptime checkers.
We want to monitor ingresses in a Kubernetes cluster and routes in an OpenShift cluster via an uptime checker, but the problem is having to manually watch for new or removed ingresses/routes and add them to or remove them from the checker.
This operator continuously watches ingresses/routes based on the defined `EndpointMonitor` custom resource, and automatically adds/removes monitors in any of the supported uptime checkers. With this solution, you can keep a check on your services and see whether they're up and running, without worrying about manually registering them on the uptime checker.
Currently we support the following monitors:
- UptimeRobot (Additional Config)
- Pingdom (Additional Config) (Not fully tested)
- StatusCake (Additional Config)
- Uptime (Additional Config)
- Updown (Additional Config)
- Application Insights (Additional Config)
- gcloud (Additional Config)
Configure the uptime checker in `config.yaml` based on your uptime provider, then create a secret `imc-config` that holds the `config.yaml` key:
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: imc-config
data:
  config.yaml: >-
    <BASE64_ENCODED_CONFIG.YAML>
type: Opaque
```
Following are the available options that you can use to customize the controller:
| Key | Description |
|---|---|
| providers | An array of uptime providers that you want to add to your controller |
| enableMonitorDeletion | A safeguard flag used to enable or disable monitor deletion when an ingress is deleted (useful for production environments where you don't want a monitor removed on ingress deletion) |
| resyncPeriod | Resync period in seconds; periodically re-syncs the monitors with the routes. Defaults to 0 (disabled) |
| creationDelay | A duration string that adds a delay before creating a new monitor (e.g. to allow DNS to catch up first) |
| monitorNameTemplate | Template for the monitor name, e.g. `{{.Namespace}}-{{.Name}}` |
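For illustration, a minimal `config.yaml` for UptimeRobot might look like the sketch below; the placeholder values and provider-specific fields (`apiKey`, `apiURL`, `alertContacts`) are assumptions to be adapted from the configuration guidelines of your provider:

```yaml
# Minimal sketch of a config.yaml; adapt the provider fields to your checker
providers:
  - name: UptimeRobot
    apiKey: <your-api-key>              # placeholder: provider credentials
    apiURL: https://api.uptimerobot.com/v2/
    alertContacts: <alert-contact-ids>  # placeholder: where alerts are sent
enableMonitorDeletion: true             # remove monitors when their ingress/route goes away
resyncPeriod: 0                         # 0 disables periodic re-sync; e.g. 300 re-syncs every 5 minutes
monitorNameTemplate: "{{.Namespace}}-{{.Name}}"
```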
- Replace `BASE64_ENCODED_CONFIG.YAML` with your `config.yaml` file encoded in base64.
- For a detailed configuration guide, refer to the Docs and go through the configuration guidelines for your uptime provider.
- For sample `config.yaml` files, refer to the Sample Configs.
- The name of the secret can be changed by setting the environment variable `CONFIG_SECRET_NAME`.
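For example, assuming your configuration lives in a local `./config.yaml`, you can let `kubectl` do the base64 encoding and create the secret in one step:

```sh
# kubectl base64-encodes the file contents under the key config.yaml
kubectl create secret generic imc-config --from-file=config.yaml=./config.yaml
```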
The `EndpointMonitor` resource can be used to manage monitors for static URLs or route/ingress references.
- Specifying a URL:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: stakater
spec:
  forceHttps: true
  url: https://stakater.com
```
- Specifying a route reference:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: frontend
spec:
  forceHttps: true
  urlFrom:
    routeRef:
      name: frontend
```
- Specifying an ingress reference:

```yaml
apiVersion: endpointmonitor.stakater.com/v1alpha1
kind: EndpointMonitor
metadata:
  name: frontend
spec:
  forceHttps: true
  urlFrom:
    ingressRef:
      name: frontend
```
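Once defined, an `EndpointMonitor` is applied and inspected like any other resource (the file name below is illustrative):

```sh
kubectl apply -f endpointmonitor.yaml   # create or update the monitor resource
kubectl get endpointmonitors            # list EndpointMonitor resources in the current namespace
```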
NOTE: For provider-specific additional configuration, refer to the Docs and go through the configuration guidelines for your uptime provider.
The following quickstart lets you set up Ingress Monitor Controller to register uptime monitors for endpoints:
If you have configured Helm on your cluster, you can deploy IngressMonitorController via Helm using the commands below. For details on the chart, see the IMC Helm Chart.
```sh
# Install CRDs
kubectl apply -f https://raw.githubusercontent.com/stakater/IngressMonitorController/master/charts/ingressmonitorcontroller/crds/endpointmonitor.stakater.com_endpointmonitors.yaml

# Install chart (Helm 3 requires a release name)
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install ingressmonitorcontroller stakater/ingressmonitorcontroller
```
- Clone this repository:

```sh
$ git clone git@github.com:stakater/IngressMonitorController.git
```

- Deploy dependencies (CRDs):

```sh
$ make deploy
```
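To confirm that the CRD was registered, you can look it up by name (the name below follows from the CRD manifest installed above):

```sh
kubectl get crd endpointmonitors.endpointmonitor.stakater.com
```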
| Key | Default | Description |
|---|---|---|
| WATCH_NAMESPACE | Namespace in which the operator is deployed | Comma-separated list of namespaces to watch; leave empty to watch all namespaces (cluster scope) |
| CONFIG_SECRET_NAME | imc-config | Name of the secret that holds the configuration |
| REQUEUE_TIME | 300 seconds | Number of seconds after which the resource should be reconciled again |
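As a sketch of how these are wired up, the variables go in the `env` section of the operator's container; the Deployment excerpt and values below are illustrative:

```yaml
# Illustrative excerpt from the operator container spec
env:
  - name: WATCH_NAMESPACE
    value: "default,staging"  # comma-separated list; empty string watches all namespaces
  - name: CONFIG_SECRET_NAME
    value: imc-config         # secret that holds config.yaml
  - name: REQUEUE_TIME
    value: "300"              # reconcile again after 300 seconds
```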
You can find more detailed documentation on configuration, extension, and support for other uptime checkers here.
If you'd like to contribute any fixes or enhancements, please refer to the documentation here
File a GitHub issue, or send us an email.
Join and talk to us on the #tools-ingressmonitor channel to discuss the Ingress Monitor Controller.
- The latest image of kube-rbac-proxy fails on OpenShift with permission issues. To resolve this, use `registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0` instead of `kube-rbac-proxy`. This issue can be tracked here.
IMC has now been converted to an Operator, and we have stopped supporting the controller-based implementation ourselves, although community support for the controller is still appreciated. Using the Operator is recommended, and existing users can follow Migration To Operator to migrate. The controller-based implementation is maintained at release-v1 instead.
Apache2 © Stakater
The IngressMonitorController is maintained by Stakater. Like it? Please let us know at hello@stakater.com
See our other projects, or contact us at hello@stakater.com for professional services and queries.
The Google Cloud test infrastructure is sponsored by JOSHMARTIN.
Stakater Team and the Open Source community! 🏆