
KafkaStreamPublisher: Fix ArrayIndexOutOfBoundsException #4517

Merged: 2 commits into deephaven:main, Sep 19, 2023

Conversation

nbauernfeind (Member)

Fixes #4516

Since we append to these chunks off the update graph, we only flush when we need to acquire a new set of chunks. Note that StreamPublisherBase#flush does not check whether the chunk has .size() > 0, but IntChunkColumnSource.addChunk does ensure that you are not adding empty chunks. (Note that flush() is the first thing that happens during an update graph cycle for the blink table adapter.)

We attempt to be clever and only allocate a new chunk if we are going to write a row to it. To do this, we pre-decrement a counter, remaining, and flush if the chunk is full. Note that the check if (--remaining == 0) { is off by one, since we have not yet written the row we are counting. That isn't a big deal on its own; it just means we never actually fill the chunks. The AIOBE actually comes from recalculating the new remaining whenever we flush: we are not accounting for the row that is about to be written.

This causes us to leave the method with remaining == 1 even though the chunks are already full.
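A minimal sketch of that failure mode, using a plain int[] in place of the real chunk types; the names here (ChunkedAppender, CHUNK_CAPACITY, appendRows, acquireFreshChunk) are hypothetical simplifications, not the actual Deephaven classes:

```java
// Hypothetical simplification of the buggy pattern; not the actual
// KafkaStreamPublisher code.
public final class ChunkedAppender {
    private static final int CHUNK_CAPACITY = 4;

    private int[] chunk = new int[CHUNK_CAPACITY];
    private int size = 0; // rows written to the current chunk

    /** Appends a batch of rows, flushing only when a chunk fills up. */
    public void appendRows(final int[] values) {
        // Recomputed on entry from the current chunk's fill level.
        int remaining = CHUNK_CAPACITY - size;
        for (final int value : values) {
            // Off-by-one: the row being counted has not been written yet, so
            // chunks are handed off one row shy of full. Harmless on its own.
            if (--remaining == 0) {
                acquireFreshChunk();
                // BUG: the row about to be written consumes one slot of the
                // fresh chunk, so this should be CHUNK_CAPACITY - 1. As
                // written, the method can exit with the chunk completely full
                // but remaining == 1.
                remaining = CHUNK_CAPACITY;
            }
            // If the previous call exited with a full chunk, remaining enters
            // this loop as 0, the pre-decrement skips past the == 0 check
            // (0 -> -1), and this write overruns the array.
            chunk[size++] = value;
        }
    }

    private void acquireFreshChunk() {
        // Hand the current chunk downstream and start an empty one.
        chunk = new int[CHUNK_CAPACITY];
        size = 0;
    }

    public static void main(final String[] args) {
        final ChunkedAppender appender = new ChunkedAppender();
        // Flushes after 3 rows, refills; exits with size == 4, remaining == 1.
        appender.appendRows(new int[] {1, 2, 3, 4, 5, 6, 7});
        // Throws ArrayIndexOutOfBoundsException: index 4, length 4.
        appender.appendRows(new int[] {8});
    }
}
```

In this sketch, changing the post-flush recalculation to CHUNK_CAPACITY - 1 accounts for the row about to be written, so the method can never exit with a full chunk and the overrun disappears; the actual fix in this PR may of course differ in its details.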

nbauernfeind merged commit 9405627 into deephaven:main on Sep 19, 2023 (9 checks passed).
github-actions bot locked and limited conversation to collaborators on Sep 19, 2023.
Development

Successfully merging this pull request may close these issues.

Kafka Stream Publisher may throw ArrayIndexOutOfBoundsException