Increase publish performance of small messages ~500% #463

Open · wants to merge 2 commits into base: master

Conversation

carlhoerberg commented:

By writing all 3 frames required for a publish in one go, and only locking and flushing the output buffer once, we increase the publish performance about 3 times.

Messages that span multiple body frames are still written one at a time; when messages are that large (>128KB), the locking and flushing is not the bottleneck. Handling the multi-frame case separately also lets the common path avoid allocating a dynamic array for each publish and use a fixed-size array instead.
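For illustration, here is a minimal sketch of that single-body-frame fast path in Go; the type, field, and function names are hypothetical and do not match the actual streadway/amqp internals:

```go
package amqpsketch

import (
	"bufio"
	"sync"
)

// conn is a hypothetical stand-in for the client's connection type;
// the real streadway/amqp internals are named differently.
type conn struct {
	writeMu sync.Mutex    // protects buf
	buf     *bufio.Writer // buffered writer over the TCP socket
}

// publishSmall writes the method, header and body frames of a small
// publish in one go, taking the output lock and flushing only once.
func (c *conn) publishSmall(method, header, body []byte) error {
	c.writeMu.Lock()
	defer c.writeMu.Unlock()

	// A fixed-size array of the three frames avoids allocating a
	// dynamic slice for every publish when the body fits in one frame.
	frames := [3][]byte{method, header, body}
	for _, f := range frames {
		if _, err := c.buf.Write(f); err != nil {
			return err
		}
	}
	return c.buf.Flush() // one flush per publish instead of one per frame
}
```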

By disabling TCP_NODELAY (that is, enabling Nagle's algorithm), many small messages can be sent in a single TCP packet; this increases the publish rate by about 100% when messages are small.

This will not increase latency under normal circumstances, but theoretically it could: if, for instance, one channel is publishing a message while another channel is declaring a queue and waiting for the CreateOK response, a delayed ACK from the server could add a 40ms delay to the wait for the Queue CreateOK.
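At the socket level, disabling TCP_NODELAY comes down to a single standard-library call on the underlying connection (Go enables TCP_NODELAY by default on TCP sockets). A minimal, self-contained sketch; the broker address and surrounding wiring are assumptions:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// Hypothetical broker address; dial the socket ourselves so we
	// can adjust its options before the AMQP handshake.
	c, err := net.Dial("tcp", "localhost:5672")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if tcp, ok := c.(*net.TCPConn); ok {
		// SetNoDelay(false) turns Nagle's algorithm on, letting the
		// kernel coalesce several small frames into one TCP packet.
		if err := tcp.SetNoDelay(false); err != nil {
			log.Fatal(err)
		}
	}
	// ...hand the connection over to the AMQP client from here.
}
```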

@michaelklishin (Collaborator) left a comment:


This lumps together two unrelated changes. The second part of this PR is highly controversial. Few things around networking spark as endless a debate as Nagle's algorithm and other settings related to TCP ACKs. Combining the two honestly makes this pull request DOA, because the risk of pissing off a large group of users is too high to accept it.

I'm curious if @streadway has an opinion on what the default should be.

@michaelklishin (Collaborator) commented:

RabbitMQ Java client sets TCP_NODELAY to true by default.
RabbitMQ .NET client sets TCP_NODELAY to true by default.
FWIW, Netty sets it to true by default, and it's a key building block for a lot of network protocol clients out there.

So I am not convinced this client should adopt a different default. It can make it easy for the user to control the value. That would be something pretty unrelated to how framesets are written to the socket.
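For example, with the existing streadway/amqp API an application that wants Nagle's algorithm can already opt in via a custom dialer, leaving the library default alone. A hedged sketch using `amqp.DialConfig` and `Config.Dial` (URL and timeout are illustrative):

```go
package main

import (
	"log"
	"net"
	"time"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.DialConfig("amqp://guest:guest@localhost:5672/", amqp.Config{
		// Config.Dial lets the application own the TCP connection and
		// set socket options before the AMQP handshake starts.
		Dial: func(network, addr string) (net.Conn, error) {
			c, err := net.DialTimeout(network, addr, 30*time.Second)
			if err != nil {
				return nil, err
			}
			if tcp, ok := c.(*net.TCPConn); ok {
				// This application opts in to Nagle's algorithm;
				// everyone else keeps the TCP_NODELAY default.
				if err := tcp.SetNoDelay(false); err != nil {
					c.Close()
					return nil, err
				}
			}
			return c, nil
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```

That keeps the existing default untouched for everyone else while still giving latency-insensitive publishers the packet coalescing this PR is after.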

@michaelklishin (Collaborator) commented:

Hey folks,

I'm posting this on behalf of the core team.

As you have noticed, this client hasn't seen a lot of activity recently.
Many users are unhappy about that and we fully recognize that it's a popular
library that should be maintained more actively. There are also many community
members who have contributed pull requests that haven't been merged for various reasons.

Because this client has a long tradition of "no breaking public API changes", certain
reasonable changes will likely never be accepted. This is frustrating to those who
have put their time and effort into trying to improve this library.

We would like to thank @streadway
for developing this client and maintaining it for a decade — that's a remarkable contribution
to the RabbitMQ ecosystem. We think now is a good time to get more contributors
involved.

Team RabbitMQ has adopted a "hard fork" of this client
in order to give the community a place to evolve the API. Several RabbitMQ core team members
will participate but we think it very much should be a community-driven effort.

What do we mean by "hard fork" and what does it mean for you? The entire history of the project
is retained in the new repository but it is not a GitHub fork by design. The license remains the same
2-clause BSD. The contribution process won't change much (except that we hope to review and accept PRs
reasonably quickly).

What does change is that this new fork will accept reasonable breaking API changes according
to Semantic Versioning (or at least our understanding of it). At the moment the API is identical
to that of streadway/amqp but the package name is different. We will begin reviewing PRs
and merging them if they make sense in the upcoming weeks.

If your PR hasn't been accepted or reviewed, you are welcome to re-submit it for rabbitmq/amqp091-go.
RabbitMQ core team members will evaluate the PRs currently open for streadway/amqp as time allows,
and pull those that don't have any conflicts. We cannot promise that every PR would be accepted
but at least we are open to changing the API going forward.

Note that it is a high season for holidays in some parts of the world, so we may be slower
to respond in the next few weeks but otherwise, we are eager to review as many currently open PRs
as practically possible soon.

Thank you for using RabbitMQ and contributing to this client. On behalf of the RabbitMQ core team,
@ChunyiLyu and @michaelklishin.
