---
copyright:
  years: 2019
lastupdated: "2019-09-05"
keywords: kubernetes, iks
subcollection: containers
---
{:new_window: target="_blank"} {:shortdesc: .shortdesc} {:screen: .screen} {:pre: .pre} {:table: .aria-labeledby="caption"} {:codeblock: .codeblock} {:tip: .tip} {:note: .note} {:important: .important} {:deprecated: .deprecated} {:download: .download} {:preview: .preview}
# Logging and monitoring Ingress
{: #ingress_health}
Customize logging and set up monitoring to help you troubleshoot issues and improve the performance of your Ingress configuration.
{: shortdesc}
## Viewing Ingress logs
{: #ingress_logs}
If you want to troubleshoot your Ingress or monitor Ingress activity, you can review the Ingress logs.
{: shortdesc}
Logs are automatically collected for your Ingress ALBs. To view the ALB logs, choose between two options.
- Create a logging configuration for the Ingress service in your cluster.
- Check the logs from the CLI. Note: You must have at least the **Reader** {{site.data.keyword.cloud_notm}} IAM service role for the `kube-system` namespace.
    1. Get the ID of a pod for an ALB.
       ```
       kubectl get pods -n kube-system | grep alb
       ```
       {: pre}
    2. Open the logs for that ALB pod.
       ```
       kubectl logs <ALB_pod_ID> nginx-ingress -n kube-system
       ```
       {: pre}
The default Ingress log content is formatted in JSON and displays common fields that describe the connection session between a client and your app. An example log with the default fields looks like the following:
```
{"time_date": "2018-08-21T17:33:19+00:00", "client": "108.162.248.42", "host": "albhealth.multizone.us-south.containers.appdomain.cloud", "scheme": "http", "request_method": "GET", "request_uri": "/", "request_id": "c2bcce90cf2a3853694da9f52f5b72e6", "status": 200, "upstream_addr": "192.168.1.1:80", "upstream_status": 200, "request_time": 0.005, "upstream_response_time": 0.005, "upstream_connect_time": 0.000, "upstream_header_time": 0.005}
```
{: screen}
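Because each default-format entry is a single JSON object per line, the logs are straightforward to post-process. A minimal sketch in Python, using the sample entry above (the slow-request threshold of 1 second is an illustrative choice, not an ALB setting):

```python
import json

# A sample ALB log entry in the default JSON format (from the example above).
entry = '''{"time_date": "2018-08-21T17:33:19+00:00", "client": "108.162.248.42", "host": "albhealth.multizone.us-south.containers.appdomain.cloud", "scheme": "http", "request_method": "GET", "request_uri": "/", "request_id": "c2bcce90cf2a3853694da9f52f5b72e6", "status": 200, "upstream_addr": "192.168.1.1:80", "upstream_status": 200, "request_time": 0.005, "upstream_response_time": 0.005, "upstream_connect_time": 0.000, "upstream_header_time": 0.005}'''

log = json.loads(entry)

# Flag slow requests (illustrative 1-second threshold) and error statuses.
is_slow = log["request_time"] > 1.0
is_error = log["status"] >= 400
print(log["request_method"], log["request_uri"], log["status"], is_slow, is_error)
```

The same pattern works line by line on the output of `kubectl logs` for an ALB pod.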
## Customizing Ingress log content and format
{: #ingress_log_format}
You can customize the content and format of logs that are collected for the Ingress ALB.
{: shortdesc}
By default, Ingress logs are formatted in JSON and display common log fields. However, you can also create a custom log format by choosing which log components are forwarded and how the components are arranged in the log output.
Before you begin, ensure that you have the **Writer** or **Manager** {{site.data.keyword.cloud_notm}} IAM service role for the `kube-system` namespace.
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
    ```
    kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
    ```
    {: pre}

2. Add a `data` section. Add the `log-format` field and optionally, the `log-format-escape-json` field.
    ```yaml
    apiVersion: v1
    data:
      log-format: '{<key1>: <log_variable1>, <key2>: <log_variable2>, <key3>: <log_variable3>}'
      log-format-escape-json: "true"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
    {: codeblock}
    | YAML file component | Description |
    |---------------------|-------------|
    | `log-format` | Replace `<key>` with the name for the log component and `<log_variable>` with a variable for the log component that you want to collect in log entries. You can include text and punctuation that you want the log entry to contain, such as quotation marks around string values and commas to separate log components. For example, formatting a component like `request: "$request"` generates the following in a log entry: `request: "GET / HTTP/1.1"`. For a list of all the variables you can use, see the NGINX variable index. To log an additional header such as `x-custom-ID`, add the following key-value pair to the custom log content: `customID: $http_x_custom_id`. Hyphens (`-`) are converted to underscores (`_`), and `$http_` must be prepended to the custom header name. |
    | `log-format-escape-json` | Optional: By default, logs are generated in text format. To generate logs in JSON format, add the `log-format-escape-json` field and use value `true`. |
    {: caption="Understanding the log-format configuration" caption-side="top"}

    For example, your log format might contain the following variables:
    ```yaml
    apiVersion: v1
    data:
      log-format: '{remote_address: $remote_addr, remote_user: "$remote_user", time_date: [$time_local], request: "$request", status: $status, http_referer: "$http_referer", http_user_agent: "$http_user_agent", request_id: $request_id}'
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
    {: screen}

    A log entry according to this format looks like the following example:
    ```
    remote_address: 127.0.0.1, remote_user: "dbmanager", time_date: [30/Mar/2018:18:52:17 +0000], request: "GET / HTTP/1.1", status: 401, http_referer: "-", http_user_agent: "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0", request_id: a02b2dea9cf06344a25611c1d7ad72db
    ```
    {: screen}
    To create a custom log format that is based on the default format for ALB logs, modify the following section as needed and add it to your configmap:
    ```yaml
    apiVersion: v1
    data:
      log-format: '{"time_date": "$time_iso8601", "client": "$remote_addr", "host": "$http_host", "scheme": "$scheme", "request_method": "$request_method", "request_uri": "$uri", "request_id": "$request_id", "status": $status, "upstream_addr": "$upstream_addr", "upstream_status": $upstream_status, "request_time": $request_time, "upstream_response_time": $upstream_response_time, "upstream_connect_time": $upstream_connect_time, "upstream_header_time": $upstream_header_time}'
      log-format-escape-json: "true"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
    {: codeblock}
3. Save the configuration file.

4. Verify that the configmap changes were applied.
    ```
    kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
    ```
    {: pre}
5. To view the Ingress ALB logs, choose between two options.
    - Create a logging configuration for the Ingress service in your cluster.
    - Check the logs from the CLI.
        1. Get the ID of a pod for an ALB.
           ```
           kubectl get pods -n kube-system | grep alb
           ```
           {: pre}
        2. Open the logs for that ALB pod. Verify that the logs follow the updated format.
           ```
           kubectl logs <ALB_pod_ID> nginx-ingress -n kube-system
           ```
           {: pre}
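The header-to-variable rule described earlier (NGINX lowercases the header name, hyphens become underscores, and `$http_` is prepended) can be expressed as a small helper. This is an illustrative sketch for building custom log formats, not part of the ALB configuration itself:

```python
def header_to_nginx_var(header_name: str) -> str:
    """Map an HTTP header name to the NGINX variable that exposes it.

    NGINX lowercases the header name, converts hyphens to underscores,
    and prefixes the result with $http_.
    """
    return "$http_" + header_name.lower().replace("-", "_")

# The x-custom-ID header from the example above:
print(header_to_nginx_var("x-custom-ID"))  # $http_x_custom_id
```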
## Monitoring the Ingress ALB
{: #ingress_monitoring}
Monitor your ALBs by deploying a metrics exporter and Prometheus agent into your cluster.
{: shortdesc}
The ALB metrics exporter uses the NGINX directive `vhost_traffic_status_zone` to collect metrics data from the `/status/format/json` endpoint on each Ingress ALB pod. The metrics exporter automatically reformats each data field in the JSON file into a metric that is readable by Prometheus. Then, a Prometheus agent picks up the metrics that are produced by the exporter and makes the metrics visible on a Prometheus dashboard.
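The reformatting idea can be illustrated with a simplified, hypothetical sketch: walk the JSON status document and emit one Prometheus-style line per numeric field, joining nested keys with underscores. This is not the exporter's actual implementation, only a sketch of the transformation it performs:

```python
import json

def json_to_prometheus(prefix: str, data: dict) -> list[str]:
    """Flatten numeric fields of a JSON document into Prometheus-style
    metric lines, joining nested keys with underscores.
    Illustrative only; the real exporter also attaches labels."""
    lines = []
    for key, value in data.items():
        name = f"{prefix}_{key}"
        if isinstance(value, dict):
            lines.extend(json_to_prometheus(name, value))
        elif isinstance(value, (int, float)) and not isinstance(value, bool):
            lines.append(f"{name} {value}")
    return lines

# Hypothetical fragment of a JSON status document.
status = json.loads('{"connections": {"active": 2, "requests": 85}, "uptime": 120}')
for line in json_to_prometheus("kube_system_alb1", status):
    print(line)
```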
### Installing the ALB metrics exporter Helm chart
{: #metrics-exporter}
Install the metrics exporter Helm chart to monitor an ALB in your cluster.
{: shortdesc}
The ALB metrics exporter pods must deploy to the same worker nodes that your ALBs are deployed to. If your ALBs run on edge worker nodes, and those edge nodes are tainted to prevent other workload deployments, the metrics exporter pods cannot be scheduled. You must remove the taints by running `kubectl taint node <node_name> dedicated:NoSchedule- dedicated:NoExecute-`.
{: note}
1. Important: Follow the instructions to install the Helm client on your local machine, install the Helm server (tiller) with a service account, and add the {{site.data.keyword.cloud_notm}} Helm repositories.

2. Install the `ibmcloud-alb-metrics-exporter` Helm chart to your cluster. This Helm chart deploys an ALB metrics exporter and creates an `alb-metrics-service-account` service account in the `kube-system` namespace. Replace `<zone>` with the zone where the ALB exists and `<alb_ID>` with the ID of the ALB that you want to collect metrics for. To view the IDs for the ALBs in your cluster, run `ibmcloud ks alb ls --cluster <cluster_name>`.
    ```
    helm install iks-charts/ibmcloud-alb-metrics-exporter --set metricsNameSpace=kube-system --set name=alb-<zone>-metrics-exporter --set albId=<alb_ID> --set albZone=<zone>
    ```
    {: pre}
3. Check the chart deployment status. When the chart is ready, the **STATUS** field near the beginning of the output has a value of `DEPLOYED`.
    ```
    helm status ibmcloud-alb-metrics-exporter
    ```
    {: pre}
4. Verify that the `ibmcloud-alb-metrics-exporter` pods are running.
    ```
    kubectl get pods -n kube-system -o wide
    ```
    {: pre}

    Example output:
    ```
    NAME                                    READY     STATUS    RESTARTS   AGE   IP               NODE
    ...
    alb-metrics-exporter-868fddf777-d49l5   1/1       Running   0          19s   172.30.xxx.xxx   10.xxx.xx.xxx
    alb-metrics-exporter-868fddf777-pf7x5   1/1       Running   0          19s   172.30.xxx.xxx   10.xxx.xx.xxx
    ```
    {: screen}
5. Repeat steps 2 - 4 for each ALB in your cluster.

6. Optional: Install the Prometheus agent to pick up the metrics that are produced by the exporter and make the metrics visible on a Prometheus dashboard.
### Installing the Prometheus agent Helm chart
{: #prometheus-agent}
After you install the metrics exporter, you can install the Prometheus agent Helm chart to pick up the metrics that are produced by the exporter and make the metrics visible on a Prometheus dashboard.
{: shortdesc}
1. Download the TAR file for the metrics exporter Helm chart from `https://icr.io/helm/iks-charts/charts/ibmcloud-alb-metrics-exporter-1.0.7.tgz`.

2. Navigate to the Prometheus subfolder.
    ```
    cd ibmcloud-alb-metrics-exporter-1.0.7.tar/ibmcloud-alb-metrics-exporter/subcharts/prometheus
    ```
    {: pre}

3. Install the Prometheus Helm chart to your cluster. Replace `<ingress_subdomain>` with the Ingress subdomain for your cluster. The URL for the Prometheus dashboard is a combination of the default Prometheus subdomain, `prom-dash`, and your Ingress subdomain, for example `prom-dash.mycluster-12345.us-south.containers.appdomain.cloud`. To find the Ingress subdomain for your cluster, run `ibmcloud ks cluster get --cluster <cluster_name>`.
    ```
    helm install --name prometheus . --set nameSpace=kube-system --set hostName=prom-dash.<ingress_subdomain>
    ```
    {: pre}
4. Check the chart deployment status. When the chart is ready, the **STATUS** field near the beginning of the output has a value of `DEPLOYED`.
    ```
    helm status prometheus
    ```
    {: pre}
5. Verify that the `prometheus` pod is running.
    ```
    kubectl get pods -n kube-system -o wide
    ```
    {: pre}

    Example output:
    ```
    NAME                                    READY     STATUS    RESTARTS   AGE   IP               NODE
    alb-metrics-exporter-868fddf777-d49l5   1/1       Running   0          19s   172.30.xxx.xxx   10.xxx.xx.xxx
    alb-metrics-exporter-868fddf777-pf7x5   1/1       Running   0          19s   172.30.xxx.xxx   10.xxx.xx.xxx
    prometheus-9fbcc8bc7-2wvbk              1/1       Running   0          1m    172.30.xxx.xxx   10.xxx.xx.xxx
    ```
    {: screen}
6. In a browser, enter the URL for the Prometheus dashboard. This hostname has the format `prom-dash.mycluster-12345.us-south.containers.appdomain.cloud`. The Prometheus dashboard for your ALB opens.

7. Review more information about the ALB, server, and upstream metrics listed in the dashboard.
### ALB metrics
{: #alb_metrics}
The `alb-metrics-exporter` automatically reformats each data field in the JSON file into a metric that is readable by Prometheus. ALB metrics collect data on the connections and responses that the ALB is handling.
{: shortdesc}

ALB metrics are in the format `kube_system_<ALB-ID>_<METRIC-NAME> <VALUE>`. For example, if an ALB receives 23 responses with 2xx-level status codes, the metric is formatted as `kube_system_public_crf02710f54fcc40889c301bfd6d5b77fe_alb1_totalHandledRequest {.. metric="2xx"} 23`, where `metric` is the Prometheus label.

The following table lists the supported ALB metric names with the metric labels in the format `<ALB_metric_name>_<metric_label>`.
### Server metrics
{: #server_metrics}
The `alb-metrics-exporter` automatically reformats each data field in the JSON file into a metric that is readable by Prometheus. Server metrics collect data on the subdomains that are defined in an Ingress resource, for example `dev.demostg1.stg.us.south.containers.appdomain.cloud`.
{: shortdesc}

Server metrics are in the format `kube_system_server_<ALB-ID>_<SUB-TYPE>_<SERVER-NAME>_<METRIC-NAME> <VALUE>`. `<SERVER-NAME>_<METRIC-NAME>` are formatted as labels, for example `albId="dev_demostg1_us-south_containers_appdomain_cloud",metric="out"`.

For example, if the server sent a total of 22319 bytes to clients, the metric is formatted as:

```
kube_system_server_public_cra6a6eb9e897e41c4a5e58f957b417aec_alb1_bytes{albId="dev_demostg1_us-south_containers_appdomain_cloud",app="alb-metrics-exporter",instance="172.30.140.68:9913",job="kubernetes-pods",kubernetes_namespace="kube-system",kubernetes_pod_name="alb-metrics-exporter-7d495d785c-8wfw4",metric="out",pod_template_hash="3805183417"} 22319
```
{: screen}

The following table lists the supported server metric names.
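A metric line like the one above follows the standard Prometheus exposition format and can be split into its name, labels, and value. A minimal parsing sketch, for well-formed single-line metrics only:

```python
import re

def parse_metric(line: str):
    """Split a Prometheus exposition-format line into (name, labels, value)."""
    m = re.match(r'^([A-Za-z_:][A-Za-z0-9_:]*)\{(.*)\}\s+(\S+)$', line)
    name, label_str, value = m.group(1), m.group(2), float(m.group(3))
    # Extract key="value" pairs from the label block.
    labels = dict(re.findall(r'(\w+)="([^"]*)"', label_str))
    return name, labels, value

# A shortened version of the server metric example above.
line = ('kube_system_server_public_cra6a6eb9e897e41c4a5e58f957b417aec_alb1_bytes'
        '{albId="dev_demostg1_us-south_containers_appdomain_cloud",metric="out"} 22319')
name, labels, value = parse_metric(line)
print(name, labels["metric"], value)
```

The same function applies to the upstream metric examples in the following sections.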
### Upstream metrics
{: #upstream_metrics}
The `alb-metrics-exporter` automatically reformats each data field in the JSON file into a metric that is readable by Prometheus. Upstream metrics collect data on the back-end service that is defined in an Ingress resource.
{: shortdesc}

Upstream metrics are formatted in two ways.
- Type 1 includes the upstream service name.
- Type 2 includes the upstream service name and a specific upstream pod IP address.
#### Upstream metrics: Type 1
{: #type_one}
Upstream type 1 metrics are in the format `kube_system_upstream_<ALB-ID>_<SUB-TYPE>_<UPSTREAM-NAME>_<METRIC-NAME> <VALUE>`.
{: shortdesc}

`<UPSTREAM-NAME>_<METRIC-NAME>` are formatted as labels, for example `albId="default-cafe-ingress-dev_demostg1_us-south_containers_appdomain_cloud-coffee-svc",metric="in"`.

For example, if the upstream service received a total of 1227 bytes from the ALB, the metric is formatted as:

```
kube_system_upstream_public_cra6a6eb9e897e41c4a5e58f957b417aec_alb1_bytes{albId="default-cafe-ingress-dev_demostg1_us-south_containers_appdomain_cloud-coffee-svc",app="alb-metrics-exporter",instance="172.30.140.68:9913",job="kubernetes-pods",kubernetes_namespace="kube-system",kubernetes_pod_name="alb-metrics-exporter-7d495d785c-8wfw4",metric="in",pod_template_hash="3805183417"} 1227
```
{: screen}

The following table lists the supported upstream type 1 metric names.
#### Upstream metrics: Type 2
{: #type_two}
Upstream type 2 metrics are in the format `kube_system_upstream_<ALB-ID>_<METRIC-NAME>_<UPSTREAM-NAME>_<POD-IP> <VALUE>`.
{: shortdesc}

`<UPSTREAM-NAME>_<POD-IP>` are formatted as labels, for example `albId="default-cafe-ingress-dev_dev_demostg1_us-south_containers_appdomain_cloud-tea-svc",backend="172_30_75_6_80"`.

For example, if the upstream service has an average request processing time (including upstream) of 40 milliseconds, the metric is formatted as:

```
kube_system_upstream_public_cra6a6eb9e897e41c4a5e58f957b417aec_alb1_requestMsec{albId="default-cafe-ingress-dev_dev_demostg1_us-south_containers_appdomain_cloud-tea-svc",app="alb-metrics-exporter",backend="172_30_75_6_80",instance="172.30.75.3:9913",job="kubernetes-pods",kubernetes_namespace="kube-system",kubernetes_pod_name="alb-metrics-exporter-7d495d785c-swkls",pod_template_hash="3805183417"} 40
```
{: screen}

The following table lists the supported upstream type 2 metric names.
## Increasing the shared memory zone size
{: #vts_zone_size}
Shared memory zones are defined so that worker processes can share information such as cache, session persistence, and rate limits. A shared memory zone, called the virtual host traffic status zone, is set up for Ingress to collect metrics data for an ALB.
{: shortdesc}
In the `ibm-cloud-provider-ingress-cm` Ingress configmap, the `vts-status-zone-size` field sets the size of the shared memory zone for metrics data collection. By default, `vts-status-zone-size` is set to `10m`. If you have a large environment that requires more memory for metrics collection, you can override the default with a larger value by following these steps.

Before you begin, ensure that you have the **Writer** or **Manager** {{site.data.keyword.cloud_notm}} IAM service role for the `kube-system` namespace.
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
    ```
    kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
    ```
    {: pre}

2. Change the value of `vts-status-zone-size` from `10m` to a larger value.
    ```yaml
    apiVersion: v1
    data:
      vts-status-zone-size: "10m"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
    {: codeblock}
3. Save the configuration file.

4. Verify that the configmap changes were applied.
    ```
    kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
    ```
    {: pre}