# Monitoring

Both the web application and the indexer can be monitored using built-in metering support based on Micrometer. The metrics are exposed in several ways.
The monitoring API endpoints do not go through authorization checks, nor are they restricted to localhost.
The meters are deliberately not per project: firstly, to avoid metric cardinality explosion; secondly, to avoid leaking private information (given the lack of authorization mentioned above).
Keep in mind that the meter names are still very much volatile; they will stabilize over time.
A Docker based demo is available at https://github.com/OpenGrok/opengrok-monitoring-docker
## /metrics/prometheus

This endpoint serves metrics in the Prometheus format. If the web application is running on https://example.com/source/, the metrics will be available at https://example.com/source/metrics/prometheus.
Insert this snippet into `/etc/prometheus/prometheus.yml`:
```yaml
  - job_name: opengrok
    metrics_path: '/source/metrics/prometheus'
    static_configs:
      # replace with actual server name and port (defaults to HTTP)
      - targets: ['localhost:8080']
```
and reload the Prometheus configuration. The web application metrics will then become available.
Notable metrics start with:
- `jvm`
- `authorization`
- `requests`
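As a quick sanity check, the exposition text returned by the endpoint can be grouped by metric prefix. A minimal sketch; the payload below is illustrative sample data (in practice it would be fetched from the `/metrics/prometheus` URL above):

```python
# Group Prometheus exposition-format samples by metric name prefix.
# The payload is a made-up sample; real data would come from e.g.
# urllib.request.urlopen("https://example.com/source/metrics/prometheus")
from collections import defaultdict

payload = """\
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="G1 Eden Space"} 1.234E7
requests_seconds_count{uri="/search"} 42.0
authorization_stack_seconds_max 0.001
"""

groups = defaultdict(list)
for line in payload.splitlines():
    if line.startswith("#") or not line.strip():
        continue  # skip comments, HELP/TYPE lines and blanks
    name = line.split("{")[0].split(" ")[0]
    groups[name.split("_")[0]].append(name)

print(sorted(groups))  # ['authorization', 'jvm', 'requests']
```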
## StatsD

The indexer metrics are exported in the StatsD format.
Here's an example of a complete read-only configuration:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<java version="11.0.4" class="java.beans.XMLDecoder">
  <object class="org.opengrok.indexer.configuration.Configuration" id="Configuration0">
    <void property="statsdConfig">
      <void property="port">
        <int>8125</int>
      </void>
      <void property="host">
        <string>localhost</string>
      </void>
      <void property="flavor">
        <object class="java.lang.Enum" method="valueOf">
          <class>io.micrometer.statsd.StatsdFlavor</class>
          <string>ETSY</string>
        </object>
      </void>
    </void>
  </object>
</java>
```
Configurable options:

| name | type | value |
|---|---|---|
| port | int | UDP port number |
| host | String | hostname |
| flavor | StatsdFlavor enum | type of StatsD export |
The set of Micrometer built-in meters is the same as for the web application.
The StatsD export is set up with buffered output and sent via UDP to the host/port given in the configuration. Even with buffering enabled, this can generate significant traffic.
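For reference, Etsy-flavor StatsD is a plain-text protocol over UDP; each datagram carries lines of the form `<name>:<value>|<type>`. A hypothetical sketch of sending one such datagram to the host/port from the configuration above (the metric name and value are made up for illustration):

```python
import socket

# Etsy-flavor StatsD line: "<name>:<value>|<type>"; "g" marks a gauge.
# Host/port match the statsdConfig example; the metric name/value are illustrative.
datagram = "jvmMemoryUsed.area.heap.id.G1_Eden_Space.statistic.value:12345678|g"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(datagram.encode("ascii"), ("localhost", 8125))  # fire-and-forget UDP
sock.close()
```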
If the indexer is run in per-project mode, a `projects` tag is added to all the metrics, with a value containing the project names separated by commas.
To use StatsD with Prometheus, run the statsd-exporter with Docker like so (note the host port for the web listener must match `--web.listen-address`):

```shell
#!/bin/bash
docker run --name=prom-statsd-exporter \
    -p 9123:9123 \
    -p 8125:8125/udp \
    -v $PWD/mapping.yml:/tmp/mapping.yml \
    prom/statsd-exporter \
    --statsd.mapping-config=/tmp/mapping.yml \
    --statsd.listen-udp=:8125 \
    --web.listen-address=:9123
```
The configuration in `/tmp/mapping.yml` can look like this:
```yaml
# Configure mappings for the statsd Prometheus exporter.
# This is used to expose metrics from the OpenGrok indexer.
mappings:
  # usage:
  # jvmMemoryUsed.area.nonheap.id.Compressed_Class_Space.statistic.value:XYZ
  - match: "jvmMemoryUsed.area.*.id.*.statistic.value"
    name: "indexer_jvm_memory_used"
    labels:
      area: "$1"
      id: "$2"
  - match: "processCpuUsage.statistic.value"
    name: "indexer_cpu_usage"
  # usage: jvmThreadsStates.state.waiting.statistic.value
  - match: "jvmThreadsStates.state.*.statistic.value"
    name: "indexer_thread_states"
    labels:
      state: "$1"
```
This maps StatsD metric names to native Prometheus metric names with labels.
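The mapping semantics can be mimicked in a few lines: each `*` in a match pattern captures one dot-delimited component and becomes a label value. A sketch whose regexes mirror the `mapping.yml` above (this is not the exporter's actual implementation):

```python
import re

# Each tuple: (regex mirroring a mapping.yml match, Prometheus name, label names).
# "*" in the statsd-exporter glob corresponds to one dot-free component, [^.]+.
MAPPINGS = [
    (r"jvmMemoryUsed\.area\.([^.]+)\.id\.([^.]+)\.statistic\.value",
     "indexer_jvm_memory_used", ("area", "id")),
    (r"processCpuUsage\.statistic\.value", "indexer_cpu_usage", ()),
    (r"jvmThreadsStates\.state\.([^.]+)\.statistic\.value",
     "indexer_thread_states", ("state",)),
]

def map_metric(statsd_name):
    """Return (prometheus_name, labels) for a StatsD name, or None if unmapped."""
    for pattern, prom_name, label_names in MAPPINGS:
        m = re.fullmatch(pattern, statsd_name)
        if m:
            return prom_name, dict(zip(label_names, m.groups()))
    return None

print(map_metric("jvmMemoryUsed.area.nonheap.id.Compressed_Class_Space.statistic.value"))
# → ('indexer_jvm_memory_used', {'area': 'nonheap', 'id': 'Compressed_Class_Space'})
```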
Once the statsd-exporter is running and starts receiving StatsD datagrams from the indexer, the metrics can be queried on http://localhost:9123/metrics
The Prometheus config snippet can look like this:
```yaml
  - job_name: 'statsd'
    static_configs:
      - targets: ['localhost:9123']
        labels: {'host': 'localhost'}
```
The metrics available in Grafana will be called `indexer_jvm_memory_used`, `indexer_cpu_usage`, and `indexer_thread_states`.
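Once scraped, these can be graphed with ordinary PromQL; a couple of illustrative queries using the label names defined in the mapping above:

```promql
indexer_jvm_memory_used{area="heap"}
sum by (state) (indexer_thread_states)
```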