[Bug] Acknowledgements lost on bookies and brokers restart resulting in messages not being delivered #22709
It looks a little strange. Did you handle the send-message result? I mean, what do you do when sending a message fails? It looks like you don't handle the send result; if some messages fail to send, you should handle the exception and retry. In general, Pulsar does not retry sending a message automatically; it's the user's responsibility.
I think that's not the case, because we are retrying. We handle the send result and retry the message if it was not sent correctly. During the test I described, we keep sending messages until 1,000,000 have been sent correctly, so we are sure that 1,000,000 messages went in.
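The retry strategy described here can be sketched as follows. This is a hypothetical illustration, not the reporter's actual code: `sendAsync()` is a simulated stand-in for a real producer call (here it fails twice before succeeding), and `sendWithRetry` is an assumed helper name.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.atomic.AtomicInteger;

public class RetryingSender {
    final AtomicInteger attempts = new AtomicInteger();

    // Simulated async send: fails on the first two attempts, then succeeds.
    CompletableFuture<Long> sendAsync(String payload) {
        if (attempts.incrementAndGet() <= 2) {
            CompletableFuture<Long> f = new CompletableFuture<>();
            f.completeExceptionally(new RuntimeException("simulated broker failure"));
            return f;
        }
        return CompletableFuture.completedFuture(1L);
    }

    // Retry until the send succeeds or maxRetries attempts are exhausted.
    long sendWithRetry(String payload, int maxRetries) {
        CompletionException last = null;
        for (int i = 0; i < maxRetries; i++) {
            try {
                return sendAsync(payload).join(); // block for simplicity in this sketch
            } catch (CompletionException e) {
                last = e; // a real client would back off before retrying
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        RetryingSender sender = new RetryingSender();
        sender.sendWithRetry("message-1", 5);
        System.out.println("sent after " + sender.attempts.get() + " attempts");
    }
}
```

With this pattern, a message counts as sent only once the future completes normally, which matches the test setup described above.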
Did you mean that you sent 1,000,000 messages and the results were all successful, but after restarting the brokers/bookies some of them were lost?
I mean we sent 1,000,000 messages, and while our script was sending them we restarted all bookies and brokers; afterwards we observed that some of the messages were lost. It was a random situation for us, so we had to perform this test a few times to observe the message loss.
Can I see your script?
I simplified our infrastructure a bit just to show our problem. To be more accurate, our test script sends data to the application over a websocket, and our application sends the messages to the topic. Our application responds with either an ok or an error result, and in the case of an error the script resends the message. From both the script's and our application's point of view, as many messages as we expected were sent correctly.
We performed many failover tests using that script and haven't observed any message loss. Only in this case, when we restarted bookies and brokers at the same time, did we observe something like that.
Which Pulsar client do you use? When you say "sent correctly", how do you define that?
We use Pulsar client version 3.2.2 (org.apache.pulsar:pulsar-client-api:3.2.2). When I say that a message was sent correctly, I mean that we use something like this:
So we create the message based on the websocket request, then we use sendAsync() and thenApply() to return an ok response. When any exception appears, we use exceptionallyCompose() and return an error response.
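That pattern can be sketched like this. This is a minimal illustration of the thenApply/exceptionallyCompose flow only: `sendAsync()` is a simulated stand-in for the real producer call, and the response strings are hypothetical.

```java
import java.util.concurrent.CompletableFuture;

public class SendResponder {
    // Simulated async send; the 'fail' flag substitutes for real broker errors.
    static CompletableFuture<Long> sendAsync(String payload, boolean fail) {
        if (fail) {
            CompletableFuture<Long> f = new CompletableFuture<>();
            f.completeExceptionally(new RuntimeException("simulated send failure"));
            return f;
        }
        return CompletableFuture.completedFuture(42L);
    }

    // Map a successful send to an ok response and any failure to an error response.
    static CompletableFuture<String> handle(String payload, boolean fail) {
        return sendAsync(payload, fail)
                .thenApply(msgId -> "ok")
                .exceptionallyCompose(ex ->
                        CompletableFuture.completedFuture("error: " + ex.getCause().getMessage()));
    }

    public static void main(String[] args) {
        System.out.println(handle("m1", false).join()); // ok
        System.out.println(handle("m2", true).join());
    }
}
```

Note that exceptions arriving at `exceptionallyCompose` after an intermediate stage are wrapped in `CompletionException`, hence the `getCause()`.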
Here is our configuration for the Pulsar client (values in the comments): …
And here is the configuration for the producer: …
@szkoludasebastian This looks correct.
Another point of view is to say that the messages aren't delivered to the consumer in your test scenario. That's about the same as message loss from your application's perspective, but there's a subtle difference. Would you be able to check whether the messages are stored correctly and would be available for delivery on another subscription and consumer started after this failure scenario? It's possible that the message loss happens in delivery on the consumer side. What subscription type are you using? How do you handle acknowledgements? When the problem happens, please share the internal stats for the topic and the subscription (use …).
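For reference, the internal stats asked for here can be captured with the pulsar-admin CLI; the topic name below is a placeholder for your own topic.

```shell
# Capture internal stats for the topic and its subscriptions right after the
# problem occurs (replace the topic name with your own):
pulsar-admin topics stats-internal persistent://public/default/my-topic
# For a partitioned topic, the per-partition variant:
pulsar-admin topics partitioned-stats-internal persistent://public/default/my-topic
```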
So we are using …
So we have these two methods: one for acknowledging messages and a second one for negative acks. We consume messages from the topic and store them in batches. When it's time to flush a batch and it is stored correctly in the target directory, we use this method to acknowledge the messages by ID. When there is an error storing the data in the desired directory, we negatively acknowledge the messages.
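The flush-then-ack ordering described here can be sketched as follows. This is an illustrative stand-in, not the reporter's code: `Store` substitutes for the real storage target, and the ack/nack lists stand in for the real consumer acknowledgement calls.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchAcker {
    // Stand-in for the storage target (a directory or object store).
    interface Store { boolean flush(List<String> batch); }

    final List<String> acked = new ArrayList<>();
    final List<String> nacked = new ArrayList<>();

    // Acknowledge only after a successful flush; negatively acknowledge on
    // failure so the broker redelivers the batch.
    void flushAndAck(List<String> batch, Store store) {
        if (store.flush(batch)) {
            acked.addAll(batch);
        } else {
            nacked.addAll(batch);
        }
    }

    public static void main(String[] args) {
        BatchAcker a = new BatchAcker();
        a.flushAndAck(List.of("m1", "m2"), batch -> true);  // flush ok -> ack
        a.flushAndAck(List.of("m3"), batch -> false);       // flush failed -> nack
        System.out.println(a.acked + " / " + a.nacked);
    }
}
```

The key property is that an ack is only issued after storage confirms success, never before.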
I've added a new subscription and consumer, but there were no messages: …
This isn't really about this bug, but since it came up, I'll comment on it. Is there a specific reason to use …
Thanks. One question about the stats: did you capture these immediately after the problem occurred, or is this simply the current state of the system you have?
There's a high chance that the Key_Shared subscription type is contributing to the problem, so comparing with the Failover subscription type in your use case would also be useful. Since you have a large number of partitions, I believe the correct solution is to use the Failover subscription type instead of Key_Shared. It will provide similar ordering guarantees to Key_Shared. Please test whether you can reproduce the issue with the Failover subscription type.
We are using …
A Failover subscription will also ensure that messages with the same key are delivered to a single consumer. Since you have 100 partitions, there's no need to use Key_Shared. Please rerun your test case with the Failover subscription type to see if the possible bug is caused by the Key_Shared implementation. That will be valuable information.
OK, I will test that, but with a failover subscription is only one consumer actively consuming messages? If so, then it will have a big impact on our performance.
Yes. Since you have 100 partitions, it is not a problem for you. The Failover subscription type handles this case: when there are multiple consumers connected across all partitions, the partitions are assigned evenly across the connected consumers. The end result of using Failover subscriptions with 100 partitions is similar to using Key_Shared subscriptions: all connected consumers will be used, and you can add more consumers as long as you don't have more than 100. In your case, I don't see a reason why it would negatively impact performance.
This is another reason to use a Failover subscription: KEY_BASED batching wouldn't be needed. A multi-topic producer will automatically route keyed messages to a single partition and allow batching of all messages in that partition. With high-cardinality keys, you would need a huge throughput to reach reasonable batch sizes when KEY_BASED batching is used.
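The routing property described here can be illustrated with a tiny sketch. Note the hash below is illustrative only, not Pulsar's actual routing hash; the point is that any deterministic key-to-partition mapping keeps all messages for one key in one partition, so ordinary per-partition batching suffices.

```java
public class KeyRouting {
    // Deterministic key -> partition mapping (illustrative, not Pulsar's exact hash).
    static int partitionFor(String key, int numPartitions) {
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        int p1 = partitionFor("device-42", 100);
        int p2 = partitionFor("device-42", 100);
        // The same key always routes to the same partition, so all of its
        // messages can share one partition-level batch.
        System.out.println(p1 == p2); // true
    }
}
```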
I made my test scenario with … I need to add here that during this test, while the last batch of data is waiting to be flushed to the directory, I'm also restarting our application instances with consumers. So I'm restarting all bookies, brokers, and also our application instances.
Great, that ensures it's not a problem originating in the Key_Shared subscription.
Does the application logic ensure that writing to disk has fully completed and the file has been closed before it acknowledges the messages? Just confirming, to rule out any bugs in the application logic.
There are currently 3.2.3-candidate-1 and 3.0.5-candidate-1 releases available for testing. Do you have a chance to test with either one of those versions?
To be more accurate, we are writing to AWS S3, so we get a response from S3 when the data is saved correctly, and only then do we acknowledge the message.
Pulsar 3.2.3 and 3.0.5 have been released. Would you be able to test with either version? Please also make sure to upgrade the clients, just to be sure that everything has been tested at this level.
Yes, I will try with 3.2.3.
Do you have a chance to test the 3.2.3 images available at …
As I said before, I built the image using this …
@szkoludasebastian do you have a chance to isolate a reproducer for this? Please take a look at https://github.com/lhotari/pulsar-playground/tree/master/issues/issue22601/standalone_env and https://github.com/lhotari/pulsar-playground/tree/master/issues/issue22601 for examples of how a reproducer could be built and shared. It would be helpful to share more details of the configuration. For example, in issue #22601, one of the key details is that TLS is used between brokers and bookies. There's currently a problem in BookKeeper when using the default setting … @szkoludasebastian are you able to share the broker configuration differences compared to the default Pulsar configuration? What type of deployment do you have? When the problem occurs, do you find any exceptions in the broker or bookie logs?
Here is our broker configuration, which we set via a ConfigMap: …
@szkoludasebastian Do you have TLS enabled between the brokers and bookies?
Right now I'm testing my scenario with this property set to false: …
I ran my test scenario 5 times and was able to get message loss on the last attempt. About 4,500 messages were lost.
@szkoludasebastian Would you be able to contribute a reproducer app? You could use https://github.com/lhotari/pulsar-playground/blob/master/src/main/java/com/github/lhotari/pulsar/playground/TestScenarioIssueRedeliveries.java (related to #21767) as a template. There are also other types of reproducer examples in this repository, such as https://github.com/lhotari/pulsar-playground/tree/master/issues/issue22601/standalone_env or https://github.com/lhotari/pulsar-playground/tree/master/issues/issue22601.
hey @lhotari, we'll contribute to that app to help reproduce it, and we'll let you know when we have something to share
@PatrykWitkowski thanks, that will be helpful
@PatrykWitkowski @szkoludasebastian Any updates on the reproducer app?
@PatrykWitkowski @szkoludasebastian Have you made progress in reproducing this issue?
This is a sign that consuming is blocked, at least temporarily. This might be related to #21199.
Also here: …
I'd recommend following this advice as mitigation: #22709 (comment)
I'll rename the title from "Message loss" to "Acknowledgements lost". Technically the messages aren't lost; it's the acknowledgements that are lost. This can currently result in a situation where the messages don't get delivered to the client without reconnecting or unloading the topic.
Analysis: in addition to addressing the lost-acks issue, there's also a need for the PIP-282 changes (#21953) and other PRs: #23226 (merged) and #23231 (in progress). There are multiple improvements in progress to fix this issue.
I have created a proposal, "PIP-377: Automatic retry for failed acknowledgements" (#23267, rendered doc). Discussion thread: https://lists.apache.org/thread/7sg7hfv9dyxto36dr8kotghtksy1j0kr
Thank you for your commitment to analyzing this problem. Do you perhaps know when these fixes and proposed changes will be available for testing?
There's no published timeline yet. Since there are multiple related issues, I'm planning to create a GitHub project, which will make it easier to follow the progress of the individual issues. It's possible that the PIP-377 solution won't be required eventually, since there could be a way to make improvements so that Key_Shared subscriptions wouldn't lose acknowledgements during bookie and broker restarts. Individual reproducer applications or instructions would be useful, since they could help validate the solutions along the way.
Unfortunately, not much progress in this area. I will post updates on the progress here.
Hi @lhotari, some time ago we started building an application to reproduce this error. However, it turned out that we were not able to. After thorough analysis, we determined that the problem was in our service: our deduplication mechanism was not implemented properly, which led to messages that should not have been acked being acked after our service restarted. The problem was in our cache implementation. Multiple further attempts to restart bookies and brokers confirmed that no message was lost. Thank you for your commitment and help. The ticket can be closed.
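For readers hitting a similar dedup-cache pitfall, here is a hypothetical sketch of a bounded, in-memory deduplication cache keyed by message ID (the class and method names are invented for illustration). The crucial caveat, which matches the root cause described above, is that an in-memory cache like this is emptied on restart, so its contents must be rebuilt or persisted before its answers can be trusted for acking decisions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DedupCache {
    private final int maxEntries;
    private final Map<String, Boolean> seen;

    DedupCache(int maxEntries) {
        this.maxEntries = maxEntries;
        // Access-order LinkedHashMap evicting the eldest entry -> a simple LRU.
        this.seen = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> e) {
                return size() > DedupCache.this.maxEntries;
            }
        };
    }

    // Returns true the first time an id is seen, false for duplicates.
    // WARNING: state lives only in memory; a process restart forgets everything.
    boolean firstTime(String messageId) {
        return seen.put(messageId, Boolean.TRUE) == null;
    }
}
```

An entry evicted by the LRU (or lost in a restart) will be reported as "first time" again, which is exactly the kind of false negative that caused the incorrect acks in this thread.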
Thanks for confirming, @dominikkulik . I'll close this issue. |
Search before asking
Read release policy
Version
Client version: 3.2.2, Server version: 3.2.2
We also noticed the same behaviour on previous versions, e.g. 3.1.0 and 3.1.2.
Minimal reproduce step
Noticed message loss when we restart all bookies and brokers while processing data: data is sent to some topic, then our application consumes messages from the topic, saves each message payload somewhere, and acknowledges the messages. To be more precise, here are the steps:
What did you expect to see?
No message loss
What did you see instead?
Some messages are lost. When we send 1,000,000 messages, we see fewer than 1,000,000 in the directory where we store them. We can't say how many fewer, because it is a very random situation. Sometimes we have all the messages, but sometimes something is missing.
Anything else?
No response
Are you willing to submit a PR?