NATS cluster has different values of the same metric #218
Comments
Hey, …
Same behaviour for us with …
Hello @wallyqs, sorry for pinging you, but in general it is difficult to understand which server displays the real information.
… and in this case nats-1 is the leader.
… and it is difficult to say which value is the true one, since the leader displays 8 but the other two servers display 0.
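Since the exporter runs in every pod, each server's reading appears as its own time series, so a plain selector makes the disagreement visible side by side. This is only a sketch: the label names (stream_name, consumer_name) are assumptions about the exporter's output, and the stream and consumer names are taken from the commands quoted later in this issue.

    # Hypothetical PromQL: list each pod's reading of the same metric.
    # Label names are assumed; one series per scraped pod is returned.
    nats_consumer_num_pending{stream_name="STREAM", consumer_name="test-consumer"}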
A few new points. First, I used the PromQL query: … Second, when I try to restart the prometheus-nats-exporter container inside the NATS server pod (the one with the metric differences) by: …
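The query itself was elided above. As a purely hypothetical reconstruction of a divergence check, the spread between the highest and lowest per-pod readings flags disagreement; again, the grouping label names are assumptions, not confirmed exporter output.

    # Hypothetical PromQL: a non-zero result means the pods disagree on
    # the value of the same consumer metric. Grouping labels are assumed.
    max by (stream_name, consumer_name) (nats_consumer_num_pending)
      -
    min by (stream_name, consumer_name) (nats_consumer_num_pending)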
As far as I understand, when using the …
After testing, it turned out that the NATS pod that is the consumer leader at the moment always shows the correct values for pending messages and for ack-pending messages. I added the label … so the alert will only ever be triggered on the current values. @jlange-koch, …
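The label the commenter added is elided above, so the following alert rule is only a sketch. It assumes some label such as is_consumer_leader="true" is attached (by the exporter or by a relabeling rule) to samples scraped from the current leader pod; the threshold and duration are placeholders.

    # Hypothetical Prometheus alert rule; the is_consumer_leader label,
    # the threshold, and the "for" duration are all assumptions.
    groups:
      - name: nats-jetstream
        rules:
          - alert: JetStreamConsumerBacklog
            # Evaluate only samples from the current consumer leader, since
            # follower pods were observed to report zero/stale values.
            expr: nats_consumer_num_pending{is_consumer_leader="true"} > 100
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: "Pending messages are piling up on the consumer leader"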
Original issue description:
A question about NATS JetStream metrics. We deployed NATS in Kubernetes using the Helm chart, and metrics are collected by an exporter (prometheus-nats-exporter:0.10.1) running in each NATS pod. The NATS cluster consists of three pods, and the nats_consumer_num_pending metric shows this result: …
The same situation occurs with the nats_consumer_delivered_consumer_seq metric: it differs between pods. Other metrics may differ too, but these are the ones I noticed. There are 3 NATS servers in the cluster and replication is set to 3, so the metrics should be the same. I want to set up alerts on these metrics, and I am trying to understand why there is such a difference and how to fix it.
Stream settings:
nats stream info STREAM -j
nats consumer info STREAM test-consumer -j
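For cross-checking the exporter against the server itself, the JSON printed by the consumer info command above contains the fields in question (delivered.consumer_seq, num_pending, num_ack_pending, and the current leader under cluster). A trimmed, hypothetical excerpt is sketched below; the values are placeholders echoing the numbers mentioned in this thread, and only a subset of the fields is shown.

    {
      "stream_name": "STREAM",
      "name": "test-consumer",
      "delivered": { "consumer_seq": 8, "stream_seq": 8 },
      "num_ack_pending": 0,
      "num_pending": 8,
      "cluster": { "leader": "nats-1" }
    }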