KafkaStreamPublisher: Fix ArrayIndexOutOfBoundsException #4517
Fixes #4516
Since we append to these chunks off of the update graph, we only flush when we need to acquire a new set of chunks. Note that `StreamPublisherBase#flush` does not check to see if the chunk has `size() > 0`, but `IntChunkColumnSource.addChunk` does ensure that you are not adding empty chunks. (Noting that `flush()` is what happens first during an update graph cycle for the blink table adapter.)
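For illustration, the interaction looks roughly like the sketch below. It is hypothetical code, not the real classes; in particular it assumes "ensure" means the column source simply skips empty chunks, which is what makes the unconditional flush safe.

```java
// Illustrative sketch only: names mirror the Deephaven classes, bodies do not.
final class EmptyChunkGuardSketch {
    static final class Chunk {
        final int[] data;
        int size;

        Chunk(int capacity) {
            data = new int[capacity];
        }
    }

    // Stand-in for IntChunkColumnSource.addChunk: refuses empty chunks,
    // providing the guard that flush() itself lacks.
    static void addChunk(Chunk chunk) {
        if (chunk.size == 0) {
            return;
        }
        // ... append chunk.data[0 .. chunk.size) to the column source ...
    }

    // Stand-in for StreamPublisherBase#flush: hands the chunk downstream
    // without checking size() > 0.
    static void flush(Chunk chunk) {
        addChunk(chunk);
    }
}
```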
We attempt to be clever and only allocate a new chunk if we are going to write a row to it. To do this, we pre-decrement a counter, `remaining`, and flush when the chunk is full. Note that the check `if (--remaining == 0) {` is off by one, as we have not yet written the row we are counting. That isn't a big deal, though; it just means we never actually fill the chunks. The `ArrayIndexOutOfBoundsException` actually comes from recalculating the new `remaining` whenever we flush -- we are not accounting for the row that is about to be written. This causes us to leave the method with `remaining == 1` even though the chunks are already full.
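To make the failure concrete, here is a minimal, runnable sketch of the pattern described above. The names (`CHUNK_CAPACITY`, `consume`, `flush`) and the `int[]` chunk are illustrative stand-ins, not the actual `KafkaStreamPublisher` code.

```java
public class RemainingCounterSketch {
    static final int CHUNK_CAPACITY = 4;

    int[] chunk = new int[CHUNK_CAPACITY];
    int size = 0; // rows currently written to the chunk

    void flush() {
        // Hand the chunk downstream and acquire a fresh one (simplified).
        chunk = new int[CHUNK_CAPACITY];
        size = 0;
    }

    void consume(int[] rows) {
        // Free slots, derived from the chunk's state on entry.
        int remaining = CHUNK_CAPACITY - size;
        for (int row : rows) {
            // Pre-decrement: fires one row early, so this check alone never
            // lets a chunk actually fill up.
            if (--remaining == 0) {
                flush();
                // BUG: this recalculation ignores the row written just below,
                // leaving 'remaining' one higher than the true free space.
                remaining = CHUNK_CAPACITY - size;
            }
            // With the accounting off by one, we can exit the loop with
            // remaining == 1 but a full chunk; the next call then computes
            // remaining == 0, pre-decrements past the flush check, and the
            // write below runs off the end of the chunk.
            chunk[size++] = row;
        }
    }

    public static void main(String[] args) {
        RemainingCounterSketch sketch = new RemainingCounterSketch();
        sketch.consume(new int[] {1, 2, 3, 4, 5, 6, 7}); // exits with remaining == 1, chunk full
        sketch.consume(new int[] {8}); // throws ArrayIndexOutOfBoundsException
    }
}
```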