Delta Kernel's Default Engine parquet writer accepts FilteredColumnarBatch. Is there any plan to have a row-based parquet writer?
For example, org.apache.parquet.hadoop.ParquetWriter writes row by row and flushes the buffered data to the parquet file once a threshold (rowGroupSize) is reached. This setting ensures we don't run out of JVM heap space. Is there any similar mechanism when using Delta Kernel's default parquet writer?
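For reference, the row-by-row pattern described above can be sketched with parquet-mr's example writer. This is a sketch under assumptions: the file path, schema, and 64 MB row-group size are illustrative values, and it assumes parquet-hadoop is on the classpath.

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class RowByRowWrite {
  public static void main(String[] args) throws Exception {
    // Example schema; a real writer would derive this from the table schema.
    MessageType schema = MessageTypeParser.parseMessageType(
        "message example { required int64 id; required binary name (UTF8); }");

    try (ParquetWriter<Group> writer = ExampleParquetWriter
        .builder(new Path("/tmp/example.parquet"))
        .withType(schema)
        // Bounds in-memory buffering: a row group is flushed at ~64 MB,
        // which is what keeps heap usage under control.
        .withRowGroupSize(64L * 1024 * 1024)
        .build()) {
      SimpleGroupFactory factory = new SimpleGroupFactory(schema);
      for (long i = 0; i < 1_000; i++) {
        Group row = factory.newGroup().append("id", i).append("name", "row-" + i);
        writer.write(row); // one row at a time; flushing happens per row group
      }
    }
  }
}
```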
Further details
As a workaround I tried to use org.apache.parquet.hadoop.ParquetWriter to write the parquet files and Delta Kernel to commit them. But to use Apache's ParquetWriter we have to convert StructType to MessageType, which would require ParquetSchemaUtils.toParquetSchema to be exposed publicly. Delta-standalone has ParquetSchemaConverter, but Kernel doesn't have a publicly accessible converter.
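The conversion gap above can be bridged by hand for simple schemas. The following is a minimal, hand-rolled sketch covering only a few primitive types — it is not Kernel API (the class name and type coverage are my own), which is exactly why a public converter would be useful:

```java
import io.delta.kernel.types.DataType;
import io.delta.kernel.types.IntegerType;
import io.delta.kernel.types.LongType;
import io.delta.kernel.types.StringType;
import io.delta.kernel.types.StructField;
import io.delta.kernel.types.StructType;
import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Type;
import org.apache.parquet.schema.Types;
import static org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.*;

// Hypothetical helper: maps a Kernel StructType to a parquet MessageType
// for int/long/string fields only. Nested and logical types are omitted.
public class KernelSchemaToParquet {
  public static MessageType toMessageType(String name, StructType structType) {
    Types.MessageTypeBuilder builder = Types.buildMessage();
    for (StructField field : structType.fields()) {
      builder.addField(toParquetField(field));
    }
    return builder.named(name);
  }

  private static Type toParquetField(StructField field) {
    DataType dt = field.getDataType();
    Type.Repetition rep =
        field.isNullable() ? Type.Repetition.OPTIONAL : Type.Repetition.REQUIRED;
    if (dt instanceof IntegerType) {
      return Types.primitive(INT32, rep).named(field.getName());
    } else if (dt instanceof LongType) {
      return Types.primitive(INT64, rep).named(field.getName());
    } else if (dt instanceof StringType) {
      return Types.primitive(BINARY, rep)
          .as(LogicalTypeAnnotation.stringType())
          .named(field.getName());
    }
    throw new UnsupportedOperationException("Not covered in this sketch: " + dt);
  }
}
```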
Willingness to contribute
The Delta Lake Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature?
Yes. I can contribute this feature independently.
Yes. I would be willing to contribute this feature with guidance from the Delta Lake community.
No. I cannot contribute this feature at this time.
@Sandy3094 We have a config that you can pass in the Configuration object used when creating DefaultEngine: delta.kernel.default.parquet.writer.targetMaxFileSize. Currently this is kind of private. If this is what you need, we can document it on the DefaultParquetHandler and DefaultEngine docs. Let me know.
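Passing that config would look roughly like the snippet below. The property name comes from the comment above; the 128 MB value is only an example, not a recommendation:

```java
import org.apache.hadoop.conf.Configuration;
import io.delta.kernel.defaults.engine.DefaultEngine;

Configuration hadoopConf = new Configuration();
// Undocumented/private today; cap each parquet file written by the
// default engine at roughly this many bytes (example value: 128 MB).
hadoopConf.set("delta.kernel.default.parquet.writer.targetMaxFileSize",
    String.valueOf(128L * 1024 * 1024));

DefaultEngine engine = DefaultEngine.create(hadoopConf);
```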