```java
@Override
public void processElement(StreamRecord<Row> element) throws Exception {
  Row row = element.getValue();
  if (buffer.isFull()) { // only judges the buffered row count
    flushRows();
  }
  boolean added = buffer.add(row);
  if (!added && !sinkOptions.isDeduplicate()) {
    throw new IllegalStateException(
        "Duplicate index in one batch, please enable deduplicate, row = " + row);
  }
}
```
The issue:

- Set a default commit time, for example `tikv.sink.max.wait.ms: 5000`.
- Each time the buffered row count is checked, also check whether the wait time has expired; if the row count has not been reached but the time has, flush the rows (a minimal sketch of this follows below).
- Alternatively, handle the timeout separately: set up a dedicated consumer pipeline that periodically checks whether the wait time has elapsed and flushes the rows when it does.
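As a rough illustration of the first variant, here is a minimal, connector-agnostic sketch of the count-or-timeout flush pattern. It is not the connector's actual code: the class, fields, and methods below are illustrative only; just `tikv.sink.buffer-size` and the proposed `tikv.sink.max.wait.ms` come from this issue.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative buffer that flushes when a row limit is hit or a maximum wait time has passed. */
public class TimedRowBuffer<T> {

  private final int maxRows;     // mirrors tikv.sink.buffer-size
  private final long maxWaitMs;  // mirrors the proposed tikv.sink.max.wait.ms
  private final List<T> rows = new ArrayList<>();
  private long lastFlushTime = System.currentTimeMillis();

  public TimedRowBuffer(int maxRows, long maxWaitMs) {
    this.maxRows = maxRows;
    this.maxWaitMs = maxWaitMs;
  }

  /** Adds a row; flushes first if the buffer is full or the wait time has expired. */
  public void add(T row) {
    long now = System.currentTimeMillis();
    if (rows.size() >= maxRows || now - lastFlushTime >= maxWaitMs) {
      flush(now);
    }
    rows.add(row);
  }

  private void flush(long now) {
    // In the connector this would call flushRows(); here we only clear the buffer.
    System.out.println("flushing " + rows.size() + " rows");
    rows.clear();
    lastFlushTime = now;
  }
}
```

The second variant would move the time check out of the element-processing path into a periodic task (for example a scheduled thread or Flink's processing-time timers), so a flush also happens when no new rows arrive at all.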
Enhancement

When I started using flink-tidb-connector-1.14 to sink data to TiDB, following README_unified_batch_streaming.md, only a small amount of data was inserted (three rows). Because `tikv.sink.buffer-size` defaults to 1000, the buffer never fills, so a flush of rows is never triggered. Code block: `TiDBWriteOperator#processElement`, shown above.
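For completeness, a hypothetical sketch of the two options side by side. How these properties are actually wired into the connector (catalog properties, table options, etc.) is described in the README and is not shown here; the class name is illustrative, and `tikv.sink.max.wait.ms` does not exist yet, it is the option this issue proposes.

```java
import java.util.HashMap;
import java.util.Map;

public class SinkPropertiesExample {
  public static void main(String[] args) {
    Map<String, String> sinkProperties = new HashMap<>();
    // Existing option: lowering it is a short-term workaround so that small
    // test inserts (e.g. three rows) still reach the flush threshold.
    sinkProperties.put("tikv.sink.buffer-size", "1");
    // Proposed option from this issue (not implemented yet): flush after at
    // most 5 seconds even if the buffer is not full.
    sinkProperties.put("tikv.sink.max.wait.ms", "5000");
    System.out.println(sinkProperties);
  }
}
```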