# Spec: add variant type #10831

Open: wants to merge 6 commits into `main`

Changes from all commits
`format/spec.md`: 27 changes (25 additions & 2 deletions)
@@ -182,6 +182,21 @@

A **`list`** is a collection of values with some element type. The element field has an integer id that is unique in the table schema.

A **`map`** is a collection of key-value pairs with a key type and a value type. Both the key field and value field each have an integer id that is unique in the table schema. Map keys are required and map values can be either optional or required. Both map keys and map values may be any type, including nested types.

#### Semi-structured Types

A **`variant`** is a value that stores semi-structured data. The structure and data types in a variant are not necessarily consistent across rows in a table or data file. The variant type and binary encoding are defined in the [Parquet project](https://github.com/apache/parquet-format/blob/4f208158dba80ff4bff4afaa4441d7270103dff6/VariantEncoding.md). Support for Variant is added in Iceberg v3.
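For illustration, a minimal sketch of the value an engine might carry for a variant column, assuming the Parquet two-buffer encoding (the `Variant` record here is hypothetical, not an API defined by this spec):

```java
import java.nio.ByteBuffer;

// An encoded variant is a pair of binary buffers: metadata (a versioned
// dictionary of object keys) and value (the encoded value itself).
record Variant(ByteBuffer metadata, ByteBuffer value) {}
```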
**Contributor:** This link should be to main/master rather than a specific SHA, right?

**Member:** This is what I worried about: aren't we syncing with a specific iteration of the file?

**Contributor Author:** Basically, we link to the specific version to be implemented in Iceberg. Later, e.g. when additional data types are added, we should also update the link here.

**Contributor:** I think linking to a specific version of the file is not very clear about what is intended. We should be very specific in this doc about which parts are intended for support in Iceberg v3.

**Contributor Author:** I don't think we want to duplicate the content of the actual spec in Parquet. Basically, what is mentioned in the Parquet spec is what should be included.

**Contributor:** To be more specific, I think we should say that only v1 of the Parquet variant spec is included (i.e., address it by a specific version from Parquet rather than by a specific link).

**Contributor Author:** Agreed; I think we would eventually do that once it is released in Parquet. Since it's still in progress right now, I will link like this and update later.

**Member:** From the linked document:

> **Important**
>
> This specification is still under active development, and has not been formally adopted.

Assuming the Parquet-level spec is subject to change, what are the conditions for releasing Iceberg v3 with variant support in the spec?

Secondly, the linked document talks about shredding. How does this interact with Iceberg field IDs in the Parquet metadata? Do all the columns share the field ID, or is only the first column supposed to be annotated with the field ID? Let's make it explicit.

**Contributor Author:** From previous discussion, the community is interested in both basic variant type support and shredding for better performance. The basic variant encoding looks settled (we could still add additional types); I think we need to finalize the shredding spec so the encoding doesn't change.

Regarding shredding, are you referring to shredded subcolumns of a variant? I'm thinking we can clarify that in the shredding spec (probably after https://github.com/apache/parquet-format/pull/461/files#diff-95f43ac21fdadae78c95da23444ed7a4036a4993e9faa2ee5d8b2c29ef6d8056). The top-level variant column has the field ID, and the subcolumns are accessed through a path like location.latitude.


Variants are similar to JSON with a wider set of primitive values including date, timestamp, timestamptz, binary, and decimals.

Variant values may contain nested types:
1. An array is an ordered collection of variant values.
2. An object is a collection of fields, each pairing a string key with a variant value.

As a semi-structured type, variant differs from Iceberg's other types in important ways:
1. Variant arrays are similar to lists, but may contain any variant value rather than a fixed element type.
2. Variant objects are similar to structs, but may contain a variable set of fields identified by name, and field values may be any variant value rather than a fixed field type.
3. Variant primitives are narrower than Iceberg's primitive types: time, timestamp_ns, timestamptz_ns, uuid, and fixed(L) are not supported.
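To make the difference concrete, a sketch of a single variant column whose rows carry different structures and types (values shown as JSON text purely for readability; actual storage uses the binary encoding described above):

```java
import java.util.List;

// Each row of the same variant column may hold a different shape:
// a primitive, an object, or an array mixing variant values.
List<String> eventColumn = List.of(
    "34",                                      // integer primitive
    "\"login\"",                               // string primitive
    "{\"user\": \"alice\", \"attempts\": 3}",  // object with named fields
    "[1, \"two\", {\"three\": 3.0}]"           // array of mixed variant values
);
```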
**Contributor:** I imagine the Parquet changes to the variant spec would be merged before we release v3?

**Contributor Author:** Are you talking about the variant spec changes apache/parquet-format#461 and apache/parquet-format#464? I think we will.

**Contributor:** Yes, mostly #464. Is there a reason not to support fixed(L)? I suppose it is redundant with string?

**Contributor Author:** Yes, we can use string instead. Also, we would need to think about how to represent fixed(L) of length L if we wanted to support it: we would have to encode L in the type to avoid losing that information, but our type field only has 5 bits.

**Contributor:** I think using the same representation as string works for fixed(L) (there can be multiple values of L encoded in a variant), with size == L; this should give enough flexibility (the type would just be "fixed"). I suppose one could get more complex than this, but I'm not sure there is value given how things are currently modelled.

**Contributor Author:** I'm assuming that for the regular fixed(L) data type, when we insert a string, it will be truncated to length L if it's too long.

If we don't have the length L, do you mean we just store the original strings, which could be longer than the defined L? Then I guess we are not actually supporting, e.g., the fixed(16) type, where I assume the expected behavior is: truncate the string to 16 bytes on write and return fixed(16) on read.

If the type is just fixed, then we will not read back/write it as fixed(16)? Let me know if I misunderstand.

**Contributor:**

> I'm assuming that for the regular fixed(L) data type, when we insert a string, it will be truncated to length L if it's too long.

Yes, I think we have different assumptions here. My assumption is that engines pass in valid fixed(L) values (i.e., values of this type always have length exactly L); no truncation would happen at the storage layer.

So the encoding would be `<value header fixed><L as int32 (maybe int16 is sufficient)><string value>`.
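A minimal sketch of that proposed layout (hypothetical: this encoding is a discussion point, not part of the spec; `FIXED_HEADER` is an assumed placeholder for the variant value-header byte):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class FixedEncodingSketch {
  // Placeholder header byte; a real header would carry the variant type tag.
  static final byte FIXED_HEADER = 0x0;

  // Layout per the comment above: <value header fixed><L as int32><bytes>.
  // int16 might suffice for L, per the comment.
  static ByteBuffer encodeFixed(byte[] bytes) {
    ByteBuffer buf = ByteBuffer.allocate(1 + Integer.BYTES + bytes.length)
        .order(ByteOrder.LITTLE_ENDIAN);
    buf.put(FIXED_HEADER);
    buf.putInt(bytes.length);  // L is recovered from this length prefix
    buf.put(bytes);
    buf.flip();
    return buf;
  }
}
```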

**Contributor Author:** Yeah, I think that works. Even with truncation, that should happen on the engine side.

We can consider adding that to the spec if there is a need.


#### Primitive Types

Supported primitive types are defined in the table below. Primitive types added after v1 have an "added by" version that is the first spec version in which the type is allowed. For example, nanosecond-precision timestamps are part of the v3 spec; using v3 types in v1 or v2 tables can break forward compatibility.
@@ -449,7 +464,7 @@

Partition field IDs must be reused if an existing partition spec contains an equivalent field.

| Transform name | Description | Source types | Result type |
|-------------------|--------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|-------------|
| **`identity`** | Source value, unmodified | Any except for `variant` | Source type |
| **`bucket[N]`** | Hash of value, mod `N` (see below) | `int`, `long`, `decimal`, `date`, `time`, `timestamp`, `timestamptz`, `timestamp_ns`, `timestamptz_ns`, `string`, `uuid`, `fixed`, `binary` | `int` |
| **`truncate[W]`** | Value truncated to width `W` (see below) | `int`, `long`, `decimal`, `string`, `binary` | Source type |
| **`year`** | Extract a date or timestamp year, as years from 1970 | `date`, `timestamp`, `timestamptz`, `timestamp_ns`, `timestamptz_ns` | `int` |
@@ -1154,6 +1169,7 @@

Maps with non-string keys must use an array representation with the `map` logical type.
|**`struct`**|`record`||
|**`list`**|`array`||
|**`map`**|`array` of key-value records, or `map` when keys are strings (optional).|Array storage must use logical type name `map` and must store elements that are 2-field records. The first field is a non-null key and the second field is the value.|
|**`variant`**|`record` with `metadata` and `value` fields|`metadata` and `value` must not be assigned field IDs and are accessed by name. Shredding is not supported in Avro.|
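A sketch of the Avro mapping above, using Avro's `SchemaBuilder` (the record name is illustrative; writers may name the record differently):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

// A variant maps to an Avro record of two bytes fields; neither field is
// assigned an Iceberg field ID, so readers locate them by name.
Schema variantSchema = SchemaBuilder.record("variant")
    .fields()
    .name("metadata").type().bytesType().noDefault()
    .name("value").type().bytesType().noDefault()
    .endRecord();
```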

Notes:

@@ -1208,6 +1224,7 @@

Lists must use the [3-level representation](https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists).
| **`struct`** | `group` | | |
| **`list`** | `3-level list` | `LIST` | See Parquet docs for 3-level representation. |
| **`map`** | `3-level map` | `MAP` | See Parquet docs for 3-level representation. |
| **`variant`** | `group` with `metadata` and `value` fields | `VARIANT` | `metadata` and `value` must not be assigned field IDs and are accessed by name. See Parquet docs for [Variant encoding](https://github.com/apache/parquet-format/blob/4f208158dba80ff4bff4afaa4441d7270103dff6/VariantEncoding.md) and [Variant shredding encoding](https://github.com/apache/parquet-format/blob/4f208158dba80ff4bff4afaa4441d7270103dff6/VariantShreddingEncoding.md). |
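As a sketch of the Parquet mapping above, assuming the parquet-java `Types` builder (the field ID `3` and the column name `event` are illustrative; the `VARIANT` logical-type annotation is omitted here because annotation support depends on the parquet-java version):

```java
import static org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.BINARY;

import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.Types;

// The variant group carries the Iceberg field ID; metadata and value carry
// no field IDs and are resolved by name.
MessageType schema = Types.buildMessage()
    .requiredGroup()
      .id(3)
      .required(BINARY).named("metadata")
      .required(BINARY).named("value")
    .named("event")
    .named("table");
```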


When reading an `unknown` column, any corresponding column must be ignored and replaced with `null` values.
@@ -1239,6 +1256,7 @@
| **`struct`** | `struct` | | |
| **`list`** | `array` | | |
| **`map`** | `map` | | |
| **`variant`** | `struct` with `metadata` and `value` fields | `iceberg.struct-type`=`VARIANT` | `metadata` and `value` must not be assigned field IDs. Shredding is not supported in ORC. |

Notes:

@@ -1285,6 +1303,8 @@

The types below are not currently valid for bucketing, and so are not hashed.
| **`float`** | `hashLong(doubleToLongBits(double(v)))` [5]| `1.0F` → `-142385009`, `0.0F` → `1669671676`, `-0.0F` → `1669671676` |
| **`double`** | `hashLong(doubleToLongBits(v))` [5]| `1.0D` → `-142385009`, `0.0D` → `1669671676`, `-0.0D` → `1669671676` |

A 32-bit hash is not defined for `variant` because there are multiple representations for equivalent values.

Notes:

1. Integer and long hash results must be identical for all integer values. This ensures that schema evolution does not change bucket partition values if integer types are promoted.
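A short sketch of the `double` hash above, assuming Guava's `murmur3_32_fixed()` matches the spec's 32-bit Murmur3 (seed 0) over the little-endian bytes of the long, and that `-0.0` is canonicalized to `0.0` so both zeros hash identically, as the example values in the table require:

```java
import com.google.common.hash.Hashing;

class BucketHashSketch {
  static int hashDouble(double v) {
    double normalized = (v == 0.0) ? 0.0 : v;  // -0.0 == 0.0, so this folds them
    return Hashing.murmur3_32_fixed()
        .hashLong(Double.doubleToLongBits(normalized))
        .asInt();
  }
}
// hashDouble(1.0D)  -> -142385009
// hashDouble(0.0D)  -> 1669671676
// hashDouble(-0.0D) -> 1669671676
```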
@@ -1331,6 +1351,7 @@

Types are serialized according to this table:
|**`struct`**|`JSON object: {`<br />&nbsp;&nbsp;`"type": "struct",`<br />&nbsp;&nbsp;`"fields": [ {`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"id": <field id int>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"name": <name string>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"required": <boolean>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"type": <type JSON>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"doc": <comment string>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"initial-default": <JSON encoding of default value>,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"write-default": <JSON encoding of default value>`<br />&nbsp;&nbsp;&nbsp;&nbsp;`}, ...`<br />&nbsp;&nbsp;`] }`|`{`<br />&nbsp;&nbsp;`"type": "struct",`<br />&nbsp;&nbsp;`"fields": [ {`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"id": 1,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"name": "id",`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"required": true,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"type": "uuid",`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"initial-default": "0db3e2a8-9d1d-42b9-aa7b-74ebe558dceb",`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"write-default": "ec5911be-b0a7-458c-8438-c9a3e53cffae"`<br />&nbsp;&nbsp;`}, {`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"id": 2,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"name": "data",`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"required": false,`<br />&nbsp;&nbsp;&nbsp;&nbsp;`"type": {`<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`"type": "list",`<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;`...`<br />&nbsp;&nbsp;&nbsp;&nbsp;`}`<br />&nbsp;&nbsp;`} ]`<br />`}`|
|**`list`**|`JSON object: {`<br />&nbsp;&nbsp;`"type": "list",`<br />&nbsp;&nbsp;`"element-id": <id int>,`<br />&nbsp;&nbsp;`"element-required": <bool>`<br />&nbsp;&nbsp;`"element": <type JSON>`<br />`}`|`{`<br />&nbsp;&nbsp;`"type": "list",`<br />&nbsp;&nbsp;`"element-id": 3,`<br />&nbsp;&nbsp;`"element-required": true,`<br />&nbsp;&nbsp;`"element": "string"`<br />`}`|
|**`map`**|`JSON object: {`<br />&nbsp;&nbsp;`"type": "map",`<br />&nbsp;&nbsp;`"key-id": <key id int>,`<br />&nbsp;&nbsp;`"key": <type JSON>,`<br />&nbsp;&nbsp;`"value-id": <val id int>,`<br />&nbsp;&nbsp;`"value-required": <bool>`<br />&nbsp;&nbsp;`"value": <type JSON>`<br />`}`|`{`<br />&nbsp;&nbsp;`"type": "map",`<br />&nbsp;&nbsp;`"key-id": 4,`<br />&nbsp;&nbsp;`"key": "string",`<br />&nbsp;&nbsp;`"value-id": 5,`<br />&nbsp;&nbsp;`"value-required": false,`<br />&nbsp;&nbsp;`"value": "double"`<br />`}`|
|**`variant`**|`JSON string: "variant"`|`"variant"`|

Note that default values are serialized using the JSON single-value serialization in [Appendix D](#appendix-d-single-value-serialization).
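For example, a reader can treat `variant` like the other primitive-style type names when deserializing schema JSON (a minimal Jackson sketch; `isVariantType` is a hypothetical helper, not spec API):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

class TypeJsonSketch {
  // Primitive-style types serialize as bare JSON strings, e.g. "variant".
  static boolean isVariantType(String typeJson) throws JsonProcessingException {
    JsonNode node = new ObjectMapper().readTree(typeJson);
    return node.isTextual() && "variant".equals(node.asText());
  }
}
// TypeJsonSketch.isVariantType("\"variant\"") -> true
```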

@@ -1480,6 +1501,7 @@

This serialization scheme is for storing single values as individual binary values.
| **`struct`** | Not supported |
| **`list`** | Not supported |
| **`map`** | Not supported |
| **`variant`** | Not supported |
**Member** (@RussellSpitzer, Nov 1, 2024): I do agree this should be not-supported for now. Then, when shredding is included, we could say something like: for shredded variants only, a binary-value concatenation of metadata and value plus a separator byte, or something similar. We can figure that out with the shredding addition, though.

**Contributor:** If we don't include variant here, then we don't need to include it in the JSON section either.

**Contributor Author:** It seems the binary representation is used for lower and upper bounds, while JSON single-value serialization is used for default values. It looks like they are defined independently. But since there is no default value for variant, I will remove it from the JSON section.


### JSON single-value serialization

@@ -1508,6 +1530,7 @@
| **`map`** | **`JSON object of key and value arrays`** | `{ "keys": ["a", "b"], "values": [1, 2] }` | Stores arrays of keys and values; individual keys and values are serialized using this JSON single-value format |



## Appendix E: Format version changes

### Version 3
@@ -1517,7 +1540,7 @@

Default values are added to struct fields in v3.
* The `write-default` is a forward-compatible change because it is only used at write time. Old writers will fail because the field is missing.
* Tables with `initial-default` will be read correctly by older readers if `initial-default` is always null for optional fields. Otherwise, old readers will default optional columns with null. Old readers will fail to read required fields which are populated by `initial-default` because that default is not supported.

Types `variant`, `unknown`, `timestamp_ns`, and `timestamptz_ns` are added in v3.

All readers are required to read tables with unknown partition transforms, ignoring the unsupported partition fields when filtering.

`version.txt`: 1 change (1 addition & 0 deletions)
@@ -0,0 +1 @@
snowflake-iceberg-rc