node-problem-detector aims to make various node problems visible to the upstream layers in the cluster management stack. It is a daemon that runs on each node, detects node problems and reports them to the apiserver. node-problem-detector can either run as a DaemonSet or standalone. It currently runs as a Kubernetes Addon enabled by default in GKE clusters, and is also enabled by default in AKS as part of the AKS Linux Extension.
There are tons of node problems that could possibly affect the pods running on the node, such as:
- Infrastructure daemon issues: ntp service down;
- Hardware issues: Bad CPU, memory or disk;
- Kernel issues: Kernel deadlock, corrupted file system;
- Container runtime issues: Unresponsive runtime daemon;
- ...
Currently, these problems are invisible to the upstream layers in the cluster management stack, so Kubernetes will continue scheduling pods to the bad nodes.
To solve this problem, we introduced node-problem-detector, a daemon that collects node problems from various daemons and makes them visible to the upstream layers. Once the upstream layers have visibility into those problems, we can discuss the remedy system.
node-problem-detector uses `Event` and `NodeCondition` to report problems to the apiserver.

- `NodeCondition`: A permanent problem that makes the node unavailable for pods should be reported as a `NodeCondition`.
- `Event`: A temporary problem that has limited impact on pods but is informative should be reported as an `Event`.
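For example, when KernelMonitor's `DockerHung` rule fires (see the message-injection example later in this document), NPD sets the node's `KernelDeadlock` condition. A minimal sketch of how to inspect it (output abridged and illustrative):

```sh
# Show the KernelDeadlock condition that NPD maintains on a node.
kubectl get node NODE_NAME -o jsonpath='{.status.conditions[?(@.type=="KernelDeadlock")]}'
# {"type":"KernelDeadlock","status":"True","reason":"DockerHung",
#  "message":"kernel: INFO: task docker:20744 blocked for more than 120 seconds."}
```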
A problem daemon is a sub-daemon of node-problem-detector. It monitors specific kinds of node problems and reports them to node-problem-detector.
A problem daemon could be:
- A tiny daemon designed for dedicated Kubernetes use-cases.
- An existing node health monitoring daemon integrated with node-problem-detector.
Currently, each problem daemon runs as a goroutine in the node-problem-detector binary. In the future, we'll separate node-problem-detector and the problem daemons into different containers, and compose them with a pod specification.
Each category of problem daemon can be disabled at compilation time by setting corresponding build tags. If they are disabled at compilation time, then all their build dependencies, global variables and background goroutines will be trimmed out of the compiled executable.
List of supported problem daemon types:

Problem Daemon Types | NodeCondition | Description | Configs | Disabling Build Tag |
---|---|---|---|---|
SystemLogMonitor | KernelDeadlock, ReadonlyFilesystem, FrequentKubeletRestart, FrequentDockerRestart, FrequentContainerdRestart | A system log monitor monitors system log and reports problems and metrics according to predefined rules. | filelog, kmsg, kernel, abrt, systemd | disable_system_log_monitor |
SystemStatsMonitor | None (could be added in the future) | A system stats monitor for node-problem-detector to collect various health-related system stats as metrics. See the proposal here. | system-stats-monitor | disable_system_stats_monitor |
CustomPluginMonitor | On-demand (according to user configuration); existing example: NTPProblem | A custom plugin monitor for node-problem-detector to invoke and check various node problems with user-defined check scripts. See the proposal here. | example | disable_custom_plugin_monitor |
HealthChecker | KubeletUnhealthy, ContainerRuntimeUnhealthy | A health checker for node-problem-detector to check kubelet and container runtime health. | kubelet, docker, containerd | |
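As a concrete illustration of how a problem daemon is configured, below is an abridged custom plugin monitor config, modeled on the bundled NTP example (intervals, messages, and the script path are illustrative):

```json
{
  "plugin": "custom",
  "pluginConfig": {
    "invoke_interval": "30s",
    "timeout": "5s",
    "max_output_length": 80,
    "concurrency": 3
  },
  "source": "ntp-custom-plugin-monitor",
  "conditions": [
    {
      "type": "NTPProblem",
      "reason": "NTPIsUp",
      "message": "ntp service is up"
    }
  ],
  "rules": [
    {
      "type": "permanent",
      "condition": "NTPProblem",
      "reason": "NTPIsDown",
      "message": "NTP service is not running",
      "path": "./config/plugin/check_ntp.sh",
      "timeout": "3s"
    }
  ]
}
```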
An exporter is a component of node-problem-detector. It reports node problems and/or metrics to certain backends. Some of them can be disabled at compile-time using a build tag. List of supported exporters:
Exporter | Description | Disabling Build Tag |
---|---|---|
Kubernetes exporter | Kubernetes exporter reports node problems to the Kubernetes API server: temporary problems get reported as Events, and permanent problems get reported as Node Conditions. | |
Prometheus exporter | Prometheus exporter reports node problems and metrics locally as Prometheus metrics. | |
Stackdriver exporter | Stackdriver exporter reports node problems and metrics to the Stackdriver Monitoring API. | disable_stackdriver_exporter |
node-problem-detector supports the following command line options:

- `--version`: Print current version of node-problem-detector.
- `--hostname-override`: A customized node name used for node-problem-detector to update conditions and emit events. node-problem-detector gets the node name first from `hostname-override`, then from the `NODE_NAME` environment variable, and finally falls back to `os.Hostname`.
- `--config.system-log-monitor`: List of paths to system log monitor configuration files, comma-separated, e.g. config/kernel-monitor.json. Node problem detector will start a separate log monitor for each configuration. You can use different log monitors to monitor different system logs.
- `--config.system-stats-monitor`: List of paths to system stats monitor config files, comma-separated, e.g. config/system-stats-monitor.json. Node problem detector will start a separate system stats monitor for each configuration. You can use different system stats monitors to monitor different problem-related system stats.
- `--config.custom-plugin-monitor`: List of paths to custom plugin monitor config files, comma-separated, e.g. config/custom-plugin-monitor.json. Node problem detector will start a separate custom plugin monitor for each configuration. You can use different custom plugin monitors to monitor different node problems. Health checkers are configured as custom plugins, using the config/health-checker-*.json config files.
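For example, to run two log monitors side by side, pass both config files in one comma-separated list (a sketch; both files ship in the repository's config/ directory):

```sh
# One system log monitor is started per listed config file.
node-problem-detector \
  --config.system-log-monitor=config/kernel-monitor.json,config/docker-monitor.json
```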
- `--enable-k8s-exporter`: Enables reporting to the Kubernetes API server, defaults to `true`.
- `--apiserver-override`: A URI parameter used to customize how node-problem-detector connects to the apiserver. This is ignored if `--enable-k8s-exporter` is `false`. The format is the same as the `source` flag of Heapster. For example, to run without auth, use `http://APISERVER_IP:APISERVER_PORT?inClusterConfig=false`. Refer to the heapster docs for a complete list of available options.
- `--address`: The address to bind the node problem detector server.
- `--port`: The port to bind the node problem detector server. Use 0 to disable.
- `--prometheus-address`: The address to bind the Prometheus scrape endpoint, defaults to `127.0.0.1`.
- `--prometheus-port`: The port to bind the Prometheus scrape endpoint, defaults to 20257. Use 0 to disable.
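Once running, you can verify the scrape endpoint with a plain HTTP request (assuming the default address and port above):

```sh
# Fetch NPD metrics in Prometheus exposition format.
curl http://127.0.0.1:20257/metrics
```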
- `--exporter.stackdriver`: Path to a Stackdriver exporter config file, e.g. config/exporter/stackdriver-exporter.json. Defaults to the empty string, which disables the exporter.
Deprecated options:

- `--system-log-monitors`: List of paths to system log monitor config files, comma-separated. This option is deprecated, replaced by `--config.system-log-monitor`, and will be removed. NPD will panic if both `--system-log-monitors` and `--config.system-log-monitor` are set.
- `--custom-plugin-monitors`: List of paths to custom plugin monitor config files, comma-separated. This option is deprecated, replaced by `--config.custom-plugin-monitor`, and will be removed. NPD will panic if both `--custom-plugin-monitors` and `--config.custom-plugin-monitor` are set.
You can enable the node tainting feature in response to permanent node problems. For example, in config/kernel-monitor.json, add a `TaintConfig` object to each `Condition` that you want to taint on, as shown below. You can omit the `TaintConfig`, or disable it by setting `enabled` to false; it is disabled by default.
```json
{
  "type": "ReadonlyFilesystem",
  "reason": "FilesystemIsNotReadOnly",
  "message": "Filesystem is not read-only",
  "taintConfig": {
    "enabled": false,
    "key": "node-problem-detector/read-only-filesystem",
    "value": "true",
    "effect": "NoSchedule"
  }
}
```
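With `enabled` set to true, a firing `ReadonlyFilesystem` condition would leave a taint like the following on the node (a sketch; output is illustrative):

```sh
# Inspect the taints applied to a node.
kubectl get node NODE_NAME -o jsonpath='{.spec.taints}'
# [{"effect":"NoSchedule","key":"node-problem-detector/read-only-filesystem","value":"true"}]
```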
1. Install development dependencies for `libsystemd` and the ARM GCC toolchain.
   - Debian/Ubuntu: `apt install libsystemd-dev gcc-aarch64-linux-gnu`
2. `git clone git@github.com:kubernetes/node-problem-detector.git`
3. Run `make` in the top directory. It will:
   - Build the binary.
   - Build the docker image. The binary and `config/` are copied into the docker image.
If you do not need certain categories of problem daemons, you can disable them at compilation time. This is the best way to keep your node-problem-detector runtime compact, without unnecessary code (e.g. global variables, goroutines, etc.). You can do so by setting the `BUILD_TAGS` environment variable before running `make`. For example:

```sh
BUILD_TAGS="disable_custom_plugin_monitor disable_system_stats_monitor" make
```
The above command compiles node-problem-detector without the Custom Plugin Monitor and the System Stats Monitor. Check out the Problem Daemon section to see how to disable each problem daemon at compilation time.
`make push` uploads the docker image to a registry. By default, the image will be uploaded to `staging-k8s.gcr.io`. It's easy to modify the `Makefile` to push the image to another registry.
The easiest way to install node-problem-detector into your cluster is to use the Helm chart:
```sh
helm repo add deliveryhero https://charts.deliveryhero.io/
helm install --generate-name deliveryhero/node-problem-detector
```
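After the chart installs, confirm that the DaemonSet has scheduled one pod per node (a sketch; the label selector below is an assumption and depends on how the chart names its resources):

```sh
# List NPD pods created by the DaemonSet; adjust the selector to your release.
kubectl get pods -l app.kubernetes.io/name=node-problem-detector
```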
Alternatively, to install node-problem-detector manually:
1. Edit node-problem-detector.yaml to fit your environment. Set the `log` volume to your system log directory (used by SystemLogMonitor). You can use a ConfigMap to overwrite the `config` directory inside the pod.
2. Edit node-problem-detector-config.yaml to configure node-problem-detector.
3. Edit rbac.yaml to fit your environment.
4. Create the ServiceAccount and ClusterRoleBinding with `kubectl create -f rbac.yaml`.
5. Create the ConfigMap with `kubectl create -f node-problem-detector-config.yaml`.
6. Create the DaemonSet with `kubectl create -f node-problem-detector.yaml`.
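Once the DaemonSet is up, you can verify that NPD is reporting by listing the conditions it maintains on a node (illustrative; the condition types depend on which monitors you enabled):

```sh
# Print each condition type and status on the node, e.g. KernelDeadlock False.
kubectl get node NODE_NAME -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'
```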
To run node-problem-detector standalone, you should set `inClusterConfig` to `false` and teach node-problem-detector how to access the apiserver with `apiserver-override`.

To run node-problem-detector standalone with an insecure apiserver connection:

```sh
node-problem-detector --apiserver-override=http://APISERVER_IP:APISERVER_INSECURE_PORT?inClusterConfig=false
```

For more scenarios, see here.
Node Problem Detector has preliminary support for Windows. Most of the functionality has not been tested, but the filelog plugin works.
Follow Issue #461 for development status of Windows support.
To develop NPD on Windows you'll need to set up your Windows machine for Go development. Install the following tools:
- Git for Windows
- Go
- Visual Studio Code
- Make
- mingw-64 WinBuilds
  - Tested with x86-64 Windows Native mode.
  - Add `$InstallDir\bin` to the Windows `PATH` variable.
```
# Run these commands in the node-problem-detector directory.

# Build in MINGW64 Window
make clean ENABLE_JOURNALD=0 build-binaries

# Test in MINGW64 Window
make test

# Run with containerd log monitoring enabled in Command Prompt. (Assumes containerd is installed.)
%CD%\output\windows_amd64\bin\node-problem-detector.exe --logtostderr --enable-k8s-exporter=false --config.system-log-monitor=%CD%\config\windows-containerd-monitor-filelog.json --config.system-stats-monitor=config\windows-system-stats-monitor.json

# Configure NPD to run as a Windows Service
sc.exe create NodeProblemDetector binpath= "%CD%\node-problem-detector.exe [FLAGS]" start= demand
sc.exe failure NodeProblemDetector reset= 0 actions= restart/10000
sc.exe start NodeProblemDetector
```
You can try node-problem-detector in a running cluster by injecting messages into the logs that node-problem-detector is watching. For example, let's assume node-problem-detector is using KernelMonitor. On your workstation, run `kubectl get events -w`. On the node, run `sudo sh -c "echo 'kernel: BUG: unable to handle kernel NULL pointer dereference at TESTING' >> /dev/kmsg"`. Then you should see the `KernelOops` event.
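In the watched event stream on your workstation, the injected problem shows up roughly like this (illustrative output; exact columns vary by kubectl version):

```sh
kubectl get events -w
# LAST SEEN   TYPE      REASON       OBJECT           MESSAGE
# 2s          Warning   KernelOops   node/NODE_NAME   kernel: BUG: unable to handle kernel NULL pointer dereference at TESTING
```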
When adding new rules or developing node-problem-detector, it is probably easier to test it on your local workstation in standalone mode. For the API server, an easy way is to use `kubectl proxy` to make a running cluster's API server available locally. You will get some errors because your local workstation is not recognized by the API server, but you should still be able to test your new rules regardless.
For example, to test KernelMonitor rules:

1. `make` (build node-problem-detector locally)
2. `kubectl proxy --port=8080` (make a running cluster's API server available locally)
3. Update KernelMonitor's `logPath` to your local kernel log directory. For example, on some Linux systems, it is `/run/log/journal` instead of `/var/log/journal`.
4. `./bin/node-problem-detector --logtostderr --apiserver-override=http://127.0.0.1:8080?inClusterConfig=false --config.system-log-monitor=config/kernel-monitor.json --config.system-stats-monitor=config/system-stats-monitor.json --port=20256 --prometheus-port=20257` (or point to any API server address:port and Prometheus port)
5. `sudo sh -c "echo 'kernel: BUG: unable to handle kernel NULL pointer dereference at TESTING' >> /dev/kmsg"`
6. You can see the `KernelOops` event in the node-problem-detector log.
7. `sudo sh -c "echo 'kernel: INFO: task docker:20744 blocked for more than 120 seconds.' >> /dev/kmsg"`
8. You can see the `DockerHung` event and condition in the node-problem-detector log.
9. You can see the `DockerHung` condition at http://127.0.0.1:20256/conditions.
10. You can see disk-related system metrics in Prometheus format at http://127.0.0.1:20257/metrics.
Note:
- You can see more rule examples under test/kernel_log_generator/problems.
- For KernelMonitor message injection, all messages should have the `kernel: ` prefix (note that there is a space after `:`); or use generator.sh.
- To inject other logs into journald, like systemd logs, use `echo 'Some systemd message' | systemd-cat -t systemd`.
node-problem-detector uses go modules to manage dependencies. Therefore, building node-problem-detector requires Go 1.11+. It still uses vendoring; see the Kubernetes go modules KEP for the design decisions. To add a new dependency, update go.mod and run `go mod vendor`.
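A typical workflow for adding or upgrading a dependency might look like this (the module path and version below are hypothetical placeholders):

```sh
# Pull in (or upgrade) a dependency; the module path/version are hypothetical.
go get example.com/some/module@v1.2.3
# Re-vendor so the checked-in vendor/ directory stays in sync with go.mod.
go mod tidy
go mod vendor
```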
A remedy system is a process or processes designed to attempt to remedy problems detected by the node-problem-detector. Remedy systems observe events and/or node conditions emitted by the node-problem-detector and take action to return the Kubernetes cluster to a healthy state. The following remedy systems exist:
- Draino automatically drains Kubernetes nodes based on labels and node conditions. Nodes that match all of the supplied labels and any of the supplied node conditions will be prevented from accepting new pods (aka 'cordoned') immediately, and drained after a configurable time. Draino can be used in conjunction with the Cluster Autoscaler to automatically terminate drained nodes. Refer to this issue for an example production use case for Draino.
- Descheduler strategy RemovePodsViolatingNodeTaints evicts pods violating NoSchedule taints on nodes. The k8s scheduler's TaintNodesByCondition feature must be enabled. The Cluster Autoscaler can be used to automatically terminate drained nodes. (A sketch of such a policy follows this list.)
- mediK8S is an umbrella project for automatic remediation systems built on the Node Health Check Operator (NHC), which monitors node conditions and delegates remediation to external remediators using the Remediation API. Poison-Pill is a remediator that will reboot the node and make sure all stateful workloads are rescheduled. NHC supports conditionally remediating if the cluster has enough healthy capacity, or manually pausing any action to minimize cluster disruption.
- MachineHealthChecks of Cluster API are responsible for remediating unhealthy Machines.
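For instance, pairing NPD's taint feature with the descheduler could look like the following policy (a sketch, assuming the descheduler's v1alpha1 policy format):

```yaml
# Hypothetical descheduler policy: evict pods that violate NoSchedule taints,
# such as those applied via node-problem-detector's TaintConfig.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
```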
NPD is tested via unit tests, NPD e2e tests, Kubernetes e2e tests and Kubernetes nodes e2e tests. Prow handles the pre-submit tests and CI tests.
CI test results can be found below:
Unit tests are run via `make test`.
See NPD e2e test documentation for how to set up and run NPD e2e tests.
Problem maker is a program used in NPD e2e tests to generate/simulate node problems. It is ONLY intended to be used by NPD e2e tests. Please do NOT run it on your workstation, as it could cause real node problems.
Node problem detector's architecture has been fairly stable. Recent versions (v0.8.13+) should be able to work with any supported Kubernetes version.