Quorum queues requeue messages to the back of the queue #10500
-
**Describe the bug**

Tested with the Docker Hub images 3.10.7-management and 3.12.12-management. NACKed messages are returned to the queue, but are appended after the current batch of undelivered messages. This is not the expected behaviour.

**Reproduction steps**

NACKed messages are requeued behind the remaining undelivered messages instead of at the head. Log from the application (3 messages were published earlier; the queue was idle before the client connected):
**Expected behavior**

A NACKed message should return to its place in the queue, i.e. at the head. The next delivery after a NACK should be the same message again, preserving FIFO order. Delivery of the same message should continue until the client positively ACKs it, as described in https://blog.rabbitmq.com/posts/2020/06/quorum-queues-local-delivery/:
As described above: single active consumer (SAC) is set on queue declaration, the consumer is declared as exclusive, and the QoS prefetch count is set to 1.

**Additional context**

Full code of the Go program:
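The reporter's original program is not reproduced here; the following is a hedged sketch of the setup the report describes (quorum queue with SAC, exclusive consumer, prefetch 1, NACK with requeue), using the `github.com/rabbitmq/amqp091-go` client. It assumes a broker at the default local URL, so it is illustrative rather than runnable as-is:

```go
package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// Assumption: a local broker with default credentials.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// QoS prefetch count of 1, as in the report.
	if err := ch.Qos(1, 0, false); err != nil {
		log.Fatal(err)
	}

	// Quorum queue with a single active consumer (queue name from the report).
	_, err = ch.QueueDeclare("q_quorum_sac", true, false, false, false, amqp.Table{
		"x-queue-type":             "quorum",
		"x-single-active-consumer": true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Exclusive consumer with manual acknowledgements (autoAck = false).
	deliveries, err := ch.Consume("q_quorum_sac", "", false, true, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	for d := range deliveries {
		log.Printf("got %q (tag %d), nacking with requeue", d.Body, d.DeliveryTag)
		// requeue = true: the quorum queue returns the message to the tail,
		// which is the behaviour this report is about.
		if err := d.Nack(false, true); err != nil {
			log.Fatal(err)
		}
	}
}
```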
Changing the value (true/false) of the "exclusive" flag on the Consume() method has no effect. Declaring the queue without the "x-single-active-consumer = true" argument also makes no difference. Unlike the quorum queue, a classic queue works as expected (also with 3 messages on the queue and the same code as above; the only change is the queue name in the first parameter of Consume()). Log:
There were 3 messages with the payloads "payload1", "payload2" and "payload3". A tcpdump capture shows the client sending NACKs while the server delivers the next message in response. The client declares and starts consuming from the q_quorum_sac queue; RabbitMQ delivers the first message with the "quorumsac1" payload (delivery tag: 1):
Replies: 2 comments 3 replies
-
This is a documented behavior and not a bug. Quorum queues cannot keep accumulating messages at the head of the queue, because in that case they cannot move on to truncate the log, eventually running the node out of disk space. This fairly common scenario, a fundamental problem where QQs cannot free up disk space because of a certain sequence of (intentional or not) client operations, is why the consumer delivery timeout was introduced. These two changes make quorum queues a lot more resilient in the face of certain (usually repeated) operations performed by applications with curious design decisions (around how they handle deliveries and acknowledge them).
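For reference, the delivery acknowledgement timeout mentioned above is configurable in `rabbitmq.conf`; the value below is the documented default (30 minutes, in milliseconds), shown here only as an illustration:

```ini
# rabbitmq.conf
# How long a delivered message may remain unacknowledged before the
# channel is closed and the delivery is requeued (milliseconds).
consumer_timeout = 1800000
```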
-
In general, the idea of "FIFO" when you requeue messages and have multiple (concurrent, competing) consumers becomes a moot point. Deliveries will effectively "jump" their queue position in such cases, even if you requeue to the head, because of the competing consumer scenario, QoS allowing for multiple deliveries at a time, and so on. In the presence of concurrent modifications, FIFO becomes FIFO-ish. Therefore the classic queue behavior of (trying to) requeue to the original position at the head is mostly an illusion with a lot of workloads.