Hello,
I've run into an issue where my app consumes a finite number of records from a topic, does some processing, then commits the offsets. The issue is that if I call `take(...)` on the record stream and then `through(commitBatchWithin(...))`, the commits time out. If I flip the order, it works. I'm not sure what's happening here exactly, but my guess is that the consumer closes its connection and the underlying Java consumer cannot finish the commit and just hangs.

Scala version: 2.12.18
fs2.kafka version: 3.1.0
Some example code snippet:
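The original snippet did not survive here, so below is a minimal hypothetical sketch of the failing order, assuming a plain `String` key/value consumer; the bootstrap servers, group id, and topic name are placeholders:

```scala
import scala.concurrent.duration._
import cats.effect.{IO, IOApp}
import fs2.kafka._

// Hypothetical reproduction sketch (the original snippet was lost):
// take(1) before commitBatchWithin causes the commit to time out.
object Repro extends IOApp.Simple {

  val settings: ConsumerSettings[IO, String, String] =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // placeholder
      .withGroupId("example-group")           // placeholder
      .withAutoOffsetReset(AutoOffsetReset.Earliest)

  val run: IO[Unit] =
    KafkaConsumer
      .stream(settings)
      .subscribeTo("example-topic")            // placeholder
      .records
      .map(_.offset)
      .take(1)                                 // truncating the stream here...
      .through(commitBatchWithin(1, 1.second)) // ...makes this commit time out
      .compile
      .drain
}
```

Swapping the last two stream operations (committing before `take(1)`) is the order that completes successfully.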
The above code throws `fs2.kafka.CommitTimeoutException`, but if `take(1)` is moved below `.through(commitBatchWithin(1, 1.second))`, it finishes and commits the offset.

P.S.: this other issue might be related: #1293