vm_memory_high_watermark.relative is not aware of cgroups limits of the container on Kubernetes #10975
-
Describe the bug
RabbitMQ does not take the container's memory limit into account and uses the total host memory instead.

Reproduction steps
Run the container on a Kubernetes node with 16Gi of memory.

Expected behavior
I expect the memory watermark log line to be computed from the container's memory limit. Instead, it is computed from the node's total memory.

Additional context
No response
Replies: 2 comments 2 replies
-
This is not a bug in RabbitMQ, and it has been discussed many times before.

vm_memory_high_watermark.relative is not aware of anything Kubernetes-related. Its behavior depends entirely on whether the runtime can determine the cgroups-capped memory value, and many factors affect that: the Kubernetes version, the image OS, whether cgroups v2 is enabled, and so on.

Use vm_memory_high_watermark.absolute instead, which starting with 3.13.x supports the same information unit suffixes as Kubernetes (such as Mi and Gi).

The list of Kubernetes-specific memory footprint and monitoring issues does not end there, in particular for clusters where streams and super streams are used.
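A minimal sketch of what that can look like on Kubernetes, assuming the official rabbitmq image (which reads additional configuration from /etc/rabbitmq/conf.d/; adjust the path if your image differs) and illustrative names and sizes:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-watermark
data:
  90-watermark.conf: |
    # An explicit absolute watermark, instead of vm_memory_high_watermark.relative,
    # sidesteps the question of whether the runtime can see the cgroup limit.
    # 3.13.x accepts Kubernetes-style suffixes such as Mi and Gi.
    vm_memory_high_watermark.absolute = 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq
spec:
  containers:
    - name: rabbitmq
      image: rabbitmq:3.13-management
      resources:
        limits:
          # Keep the watermark comfortably below the container limit to leave
          # headroom for the runtime, binaries, and off-heap allocations.
          memory: 4Gi
      volumeMounts:
        - name: extra-config
          mountPath: /etc/rabbitmq/conf.d/90-watermark.conf
          subPath: 90-watermark.conf
  volumes:
    - name: extra-config
      configMap:
        name: rabbitmq-watermark
```

Whatever values you pick, the important part is the relationship between the two numbers: the absolute watermark should sit below the container's memory limit so the broker starts blocking publishers before the kernel OOM-kills the pod.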
-
This doc guide is where I'll add a Kubernetes-specific note to both, because this question keeps coming up.