Search before asking

Paimon version
paimon-1.0-snapshot

Compute Engine
flink 1.18.0

Minimal reproduce step
1. Create a Paimon catalog and use Kafka as the log system.
2. Create a primary key table and use Kafka as the log system.
3. Create a datagen table.
4. Insert the datagen table's data into test_tb:
insert into test_tb select * from datagen_table

What doesn't meet your expectations?
org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: Failed to send data to Kafka: Partition 5 of topic test_topic with partition count 1 is not present in metadata after 60000 ms.
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1428)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:859)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:99)
at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.invoke(TwoPhaseCommitSinkFunction.java:240)
at org.apache.paimon.flink.sink.RowDataStoreWriteOperator.processElement(RowDataStoreWriteOperator.java:136)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:237)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.processElement(AbstractStreamTaskNetworkInput.java:146)
at org.apache.flink.streaming.runtime.io.AbstractStreamTaskNetworkInput.emitNext(AbstractStreamTaskNetworkInput.java:110)
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.kafka.shaded.org.apache.kafka.common.errors.TimeoutException: Partition 5 of topic test_topic with partition count 1 is not present in metadata after 60000 ms.

The bucket number of the Paimon table is 10, but the partition count of the Kafka topic is 1, so records are routed to a nonexistent partition when they are sent to Kafka.

Anything else?
In this code, the Kafka partition for each produced record is derived from the table bucket. Does this mean the number of Kafka partitions must always match the number of buckets in the table? In our production scenario, Kafka topics always keep the default partition count.

Are you willing to submit a PR?
I'm willing to submit a PR!
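For reference, the reproduce steps above could look roughly like the following Flink SQL. This is only a sketch: the warehouse path, bootstrap servers, topic name, and column schema are placeholders, not taken from the report.

```sql
-- Hypothetical sketch of the reproduce steps; all option values are placeholders.
CREATE CATALOG paimon_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:///tmp/paimon'
);
USE CATALOG paimon_catalog;

-- Primary key table with 10 buckets, using Kafka as the log system.
CREATE TABLE test_tb (
    id INT,
    name STRING,
    PRIMARY KEY (id) NOT ENFORCED
) WITH (
    'bucket' = '10',
    'log.system' = 'kafka',
    'kafka.bootstrap.servers' = 'localhost:9092',
    'kafka.topic' = 'test_topic'
);

-- Unbounded datagen source with a matching schema.
CREATE TEMPORARY TABLE datagen_table (
    id INT,
    name STRING
) WITH ('connector' = 'datagen');

INSERT INTO test_tb SELECT * FROM datagen_table;
```

If test_topic is created (or auto-created) with the default single partition, this setup reproduces the timeout above as soon as a record lands in a bucket other than 0.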
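The bucket/partition mismatch described above can be illustrated with a small self-contained sketch. This is not Paimon's actual routing code; the function names are hypothetical. When the producer uses the bucket id directly as the Kafka partition, any bucket id greater than or equal to the topic's partition count targets a partition that does not exist, whereas wrapping the bucket id with a modulo always yields a valid partition.

```python
# Illustrative sketch only -- not Paimon's actual code.
# A table with 10 buckets writing to a Kafka topic with 1 partition:
NUM_BUCKETS = 10
NUM_PARTITIONS = 1

def partition_by_bucket(bucket: int) -> int:
    # Routing that assumes one Kafka partition per bucket.
    return bucket

def partition_by_bucket_modulo(bucket: int, num_partitions: int) -> int:
    # A possible fix: wrap the bucket id into the valid partition range.
    return bucket % num_partitions

# Bucket 5 is routed to partition 5, which does not exist in a
# 1-partition topic -- the "Partition 5 ... is not present" error above.
assert partition_by_bucket(5) >= NUM_PARTITIONS

# With the modulo mapping, every bucket lands on a valid partition.
assert all(
    partition_by_bucket_modulo(b, NUM_PARTITIONS) < NUM_PARTITIONS
    for b in range(NUM_BUCKETS)
)
```

Under this model, the direct mapping only works when the topic's partition count is at least the table's bucket count, which matches the behavior reported here.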