Offset is going ahead and we are missing message with lag #1276
Comments
Hi, any suggested solution for now? |
Actually, for now, fetch and commit manually (see the sketch below).
|
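For context, here is a minimal sketch of the manual fetch-and-commit pattern with kafka-go's Reader, using hypothetical broker, topic, and group names. FetchMessage returns messages without committing, and CommitMessages commits explicitly, so the committed offset only advances after processing:

```go
package main

import (
	"context"
	"log"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// Hypothetical broker/topic/group names, for illustration only.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "example-group",
		Topic:   "example-topic",
	})
	defer r.Close()

	ctx := context.Background()
	for {
		// FetchMessage returns the next message without committing its offset.
		m, err := r.FetchMessage(ctx)
		if err != nil {
			log.Printf("fetch error: %v", err)
			break
		}

		// ... process the message here ...
		log.Printf("partition %d offset %d: %s", m.Partition, m.Offset, string(m.Value))

		// Commit only after processing succeeds, so the committed offset
		// cannot run ahead of the messages actually handled.
		if err := r.CommitMessages(ctx, m); err != nil {
			log.Printf("commit error: %v", err)
			break
		}
	}
}
```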
I am also experiencing the same issue and I am unaware of its cause |
@NitinHsharma do you have a reproducer? Or is there any factor you saw that makes this happen more often? |
@nachogiljaldo No, it is random. One more observation I made today: if I have fewer consumer pods than partitions, a single consumer pod takes on multiple partitions to read, but it keeps consuming only one of them continuously, since there is continuous traffic on the Kafka topic. So the lag on the other partition keeps increasing until I forcefully add one more consumer pod. |
Just to confirm, do you think this could potentially be related to rebalances? (i.e. a rebalance with a pending async commit that sets the offset to one older than the one you had?) Something like this: #1308 |
Yes, it could be. |
We are using the ReadMessage function with a consumer group, which works pretty well, but sometimes one partition's offset gets set ahead of its committed message(s), so the in-between messages get stuck in Kafka. No reader is able to get those messages until we restart the pods, essentially forcing a rebalance of the consumer group.
Below is the basic code we are using to consume the messages.
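The original snippet is not reproduced in this excerpt; the following is a minimal sketch of what such a consumer typically looks like, assuming hypothetical broker, topic, and group names. With GroupID set, kafka-go's ReadMessage commits offsets automatically:

```go
package main

import (
	"context"
	"log"

	kafka "github.com/segmentio/kafka-go"
)

func main() {
	// Hypothetical broker/topic/group names; the actual values from the
	// reporter's setup are not shown in this issue excerpt.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "example-group",
		Topic:   "example-topic",
	})
	defer r.Close()

	for {
		// With a GroupID configured, ReadMessage fetches the next message
		// and commits its offset on behalf of the consumer group.
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			log.Printf("read error: %v", err)
			break
		}
		log.Printf("partition %d offset %d: %s", m.Partition, m.Offset, string(m.Value))
	}
}
```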
Below are the corresponding logs.
Now if you look at the end of the logs, the library has committed offset 79 on partition 5, but somehow it moved to 80, which is causing a lag of 1 message in Kafka.