# Simplify Variant shredding and refactor for clarity #461

Another motivation for the representation is that (aside from metadata) each nested Variant value is contiguous and self-contained.
For example, in a Variant containing an Array of Variant values, the representation of an inner Variant value, when paired with the metadata of the full variant, is itself a valid Variant.

This document describes the Variant Binary Encoding scheme.
The [Variant Shredding specification](VariantShredding.md) describes the details of shredding Variant values as typed Parquet columns.

## Variant in Parquet

A Variant value in Parquet is represented by a group with 2 fields, named `value` and `metadata`.

* The Variant group must be annotated with the `VARIANT` logical type.
* Both fields `value` and `metadata` must be of type `binary` (called `BYTE_ARRAY` in the Parquet thrift definition).
* The `metadata` field is required and must be a valid Variant metadata, as defined below.
* The `value` field is required for unshredded Variant values.
* The `value` field is optional when parts of the Variant value are shredded according to the [Variant Shredding specification](VariantShredding.md).

* When present, the `value` field must be a valid Variant value, as defined below.

This is the expected unshredded representation in Parquet:

```
optional group variant_name (VARIANT) {
  required binary metadata;
  required binary value;
}
```

This is an example representation of a shredded Variant in Parquet:
```
optional group shredded_variant_name (VARIANT) {
  required binary metadata;
  optional binary value;
  optional int64 typed_value;
}
```

The `VARIANT` annotation places no additional restrictions on the repetition of Variant groups, but repetition may be restricted by containing types (such as `MAP` and `LIST`).
The Variant group name is the name of the Variant column.
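For example, a Variant element nested in a Parquet `LIST` follows the list's usual three-level structure, with the inner `element` group's repetition constrained by the `LIST` convention (a sketch; the column names are illustrative):

```
optional group tags (LIST) {
  repeated group list {
    optional group element (VARIANT) {
      required binary metadata;
      required binary value;
    }
  }
}
```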

## Metadata encoding

The encoded metadata always starts with a header byte.
[metadata header diagram and field descriptions omitted]

The first `offset` value will always be `0`, and the last `offset` value will always be the total length of `bytes`.
The last part of the metadata is `bytes`, which stores all the string values in the dictionary.
All string values must be UTF-8 encoded strings.

### Metadata encoding grammar

The grammar for encoded metadata is as follows:

[metadata grammar omitted]

Notes:
- If `sorted_strings` is set to 1, strings in the dictionary must be unique and sorted in lexicographic order. If the value is set to 0, readers may not make any assumptions about string order or uniqueness.
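
Putting the pieces together, metadata decoding reduces to a few index computations. Below is a minimal decoding sketch in Python; the header bit positions (`version` in the low 4 bits, `sorted_strings` at bit 4, `offset_size_minus_one` in the top 2 bits) and little-endian integers follow the full specification's layout, which is elided from this excerpt, so treat them as assumptions here:

```python
def decode_metadata(buf: bytes) -> list[str]:
    """Sketch: decode a Variant metadata buffer into its dictionary strings."""
    header = buf[0]
    version = header & 0x0F                    # assumed: low 4 bits
    if version != 1:
        raise ValueError(f"unsupported metadata version: {version}")
    offset_size = ((header >> 6) & 0x03) + 1   # assumed: top 2 bits, minus-one encoded

    def read_uint(pos: int) -> int:
        return int.from_bytes(buf[pos:pos + offset_size], "little")

    dictionary_size = read_uint(1)
    offsets_start = 1 + offset_size
    bytes_start = offsets_start + (dictionary_size + 1) * offset_size

    strings = []
    for i in range(dictionary_size):
        start = read_uint(offsets_start + i * offset_size)
        end = read_uint(offsets_start + (i + 1) * offset_size)
        strings.append(buf[bytes_start + start:bytes_start + end].decode("utf-8"))
    return strings
```

When `sorted_strings` is set, the returned list is additionally sorted and duplicate-free, so readers can binary-search it by field name.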


## Value encoding

The entire encoded Variant value includes the `value_metadata` byte, and then 0 or more bytes for the `val`.
[value encoding diagram omitted]
### Basic Type

The `basic_type` is a 2-bit value that indicates the basic type of the Variant value.
The [basic types table](#encoding-types) shows what each value represents.

### Value Header

The `value_header` is a 6-bit value that carries additional type information; its format depends on the `basic_type`.
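
A reader recovers both pieces with two mask-and-shift operations, as in this sketch (assuming `basic_type` occupies the low two bits of the byte, per the layout above):

```python
def split_value_metadata(value_metadata: int) -> tuple[int, int]:
    basic_type = value_metadata & 0b11               # low 2 bits
    value_header = (value_metadata >> 2) & 0b111111  # remaining 6 bits
    return basic_type, value_header
```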

#### Value Header for Primitive type (`basic_type`=0)

When `basic_type` is `0`, `value_header` is a 6-bit `primitive_header`.
The [primitive types table](#encoding-types) shows what each value represents.
```
              +-----------------------+
value_header  |   primitive_header    |
              +-----------------------+
```

#### Value Header for Short string (`basic_type`=1)

When `basic_type` is `1`, `value_header` is a 6-bit `short_string_header`.
```
              +-------------------------+
value_header  |   short_string_header   |
              +-------------------------+
```
The `short_string_header` value is the length of the string.
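
For example, a sketch of decoding a short string, reusing `split_value_metadata` from the sketch above (the string bytes immediately follow the `value_metadata` byte):

```python
def decode_short_string(buf: bytes) -> str:
    basic_type, length = split_value_metadata(buf[0])
    assert basic_type == 1, "not a short string"
    # short_string_header is the byte length of the UTF-8 string (0 to 63)
    return buf[1:1 + length].decode("utf-8")
```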

#### Value Header for Object (`basic_type`=2)

When `basic_type` is `2`, `value_header` is made up of `field_offset_size_minus_one`, `field_id_size_minus_one`, and `is_large`.
[object value_header diagram omitted]

The actual number of bytes is computed as `field_offset_size_minus_one + 1` and `field_id_size_minus_one + 1`.
`is_large` is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are used.
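
In code, the minus-one encodings translate directly to byte widths (a small sketch):

```python
def object_header_sizes(field_offset_size_minus_one: int,
                        field_id_size_minus_one: int,
                        is_large: int) -> tuple[int, int, int]:
    field_offset_size = field_offset_size_minus_one + 1  # 1 to 4 bytes per offset
    field_id_size = field_id_size_minus_one + 1          # 1 to 4 bytes per field id
    num_elements_size = 4 if is_large else 1             # bytes for num_elements
    return field_offset_size, field_id_size, num_elements_size
```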

#### Value Header for Array (`basic_type`=3)

When `basic_type` is `3`, `value_header` is made up of `field_offset_size_minus_one` and `is_large`.
[array value_header diagram omitted]

The actual number of bytes is computed as `field_offset_size_minus_one + 1`.
`is_large` is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If `is_large` is `0`, 1 byte is used, and if `is_large` is `1`, 4 bytes are used.

### Value Data

The `value_data` encoding format depends on the type specified by `value_metadata`.
For some types, `value_data` occupies zero bytes.

#### Value Data for Primitive type (`basic_type`=0)

When `basic_type` is `0`, `value_data` depends on the `primitive_header` value.
The [primitive types table](#encoding-types) shows the encoding format for each primitive type.

#### Value Data for Short string (`basic_type`=1)

When `basic_type` is `1`, `value_data` is the sequence of UTF-8 encoded bytes that represents the string.

#### Value Data for Object (`basic_type`=2)

When `basic_type` is `2`, `value_data` encodes an object.
The encoding format is shown in the following diagram:
[object encoding diagram and example omitted]

The `field_id` list must be `[<id for key "a">, <id for key "b">, <id for key "c">]`.
The `field_offset` list must be `[<offset for value 1>, <offset for value 2>, <offset for value 3>, <last offset>]`.
The `value` list can be in any order.
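
Because `field_id`s are ordered by their field names, a reader can locate a single field with binary search instead of decoding the whole object. A hedged sketch over already-parsed pieces (these parameter shapes are illustrative, not a spec API):

```python
def get_field(name: str, dictionary: list[str], field_ids: list[int],
              field_offsets: list[int], values: bytes) -> bytes | None:
    """Sketch: return the encoded Variant value for `name`, or None if absent."""
    lo, hi = 0, len(field_ids) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        key = dictionary[field_ids[mid]]
        if key == name:
            # a complete Variant value begins at this byte offset
            return values[field_offsets[mid]:]
        if key < name:  # Python's code-point order matches UTF-8 byte order
            lo = mid + 1
        else:
            hi = mid - 1
    return None
```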

#### Value Data for Array (`basic_type`=3)

When `basic_type` is `3`, `value_data` encodes an array. The encoding format is shown in the following diagram:
[array encoding diagram omitted]

The `field_offset` list is followed by the `value` list.
There are `num_elements` `value` entries, and each `value` is an encoded Variant value.
For the i-th array entry, the value is the Variant `value` starting from the i-th `field_offset` byte offset.
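
Element access is then a direct offset lookup. The sketch below assumes, as for objects, that the `field_offset` list carries one final offset marking the end of the last value:

```python
def get_element(i: int, field_offsets: list[int], values: bytes) -> bytes:
    # the i-th element is the encoded Variant value between consecutive offsets
    return values[field_offsets[i]:field_offsets[i + 1]]
```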

### Value encoding grammar

The grammar for an encoded value is:

[value encoding grammar omitted]

The short string basic type is semantically identical to the "string" primitive type.

The Decimal type contains a scale, but no precision. The implied precision of a decimal value is `floor(log_10(val)) + 1`.
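
For example, a decimal with unscaled value `12345` has an implied precision of `floor(log_10(12345)) + 1 = 5`, regardless of scale. Counting digits computes the same thing exactly, even for large values (treating zero as one digit is an assumption, not from the spec):

```python
def implied_precision(unscaled: int) -> int:
    # digit count of |unscaled|; equals floor(log10(|unscaled|)) + 1 for nonzero values
    return len(str(abs(unscaled))) if unscaled else 1
```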

## Encoding types

| Basic Type | ID | Description |
|--------------|-----|---------------------------------------------------|
| Primitive    | `0` | One of the primitive types                        |
| Short string | `1` | A string with a length less than 64 bytes         |
| Object | `2` | A collection of (string-key, variant-value) pairs |
| Array | `3` | An ordered sequence of variant values |

| Variant Logical Type | Variant Physical Type | Type ID | Equivalent Parquet Type | Binary format |
|----------------------|-----------------------------|---------|-----------------------------|---------------------------------------------------------------------------------------------------------------------|
| NullType | null | `0` | UNKNOWN | none |
| Boolean | boolean (True) | `1` | BOOLEAN | none |
| Boolean | boolean (False) | `2` | BOOLEAN | none |
| Exact Numeric | int8 | `3` | INT(8, signed) | 1 byte |
[remaining primitive type rows omitted]

The *Logical Type* column indicates logical equivalence of physically encoded types.
For example, a user expression operating on a string value containing "hello" should behave the same whether it is encoded with the short string optimization or the long string encoding.
Similarly, user expressions operating on an *int8* value of 1 should behave the same as a decimal16 with scale 2 and unscaled value 100.

## String values must be UTF-8 encoded

All strings within the Variant binary format must be UTF-8 encoded.
This includes the dictionary key string values, the "short string" values, and the "long string" values.

## Object field ID order and uniqueness

For objects, field IDs and offsets must be listed in the order of the corresponding field names, sorted lexicographically (using unsigned byte ordering for UTF-8).
Note that the field values themselves are not required to follow this order, so offsets will not necessarily be listed in ascending order. Allowing the values to be stored in any order gives writers flexibility when constructing Variant values.
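
For example, a writer can obtain the required ordering with a byte-wise sort over the UTF-8 encodings of the field names (the sample names are illustrative):

```python
names = ["b", "a", "aa", "é"]
# unsigned byte order over UTF-8: multi-byte characters sort after all ASCII
ordered = sorted(names, key=lambda s: s.encode("utf-8"))
assert ordered == ["a", "aa", "b", "é"]
```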
Field names are case-sensitive.
Field names are required to be unique for each object.
It is an error for an object to contain two fields with the same name, whether or not they have distinct dictionary IDs.

## Versions and extensions

An implementation is not expected to parse a Variant value whose metadata version is higher than the version supported by the implementation.
However, new types may be added to the specification without incrementing the version ID.
In such a situation, an implementation should be able to read the rest of the Variant value if desired.

## Shredding

A single Variant object may have poor read performance when only a small subset of fields is needed.
A better approach is to create separate columns for individual fields, referred to as shredding or subcolumnarization.
[VariantShredding.md](VariantShredding.md) describes the Variant shredding specification in Parquet.

## Conversion to JSON

Values stored in the Variant encoding are a superset of JSON values.
For example, a Variant value can be a date that has no equivalent type in JSON.
To maximize compatibility with readers that can process JSON but not Variant, the following conversions should be used when producing JSON from a Variant:

| Variant type | JSON type | Representation requirements | Example |
|---------------|-----------|----------------------------------------------------------|--------------------------------------|
| Null type | null | `null` | `null` |
| Boolean | boolean | `true` or `false` | `true` |
| Exact Numeric | number | Digits in fraction must match scale, no exponent | `34`, `34.00` |
| Float | number | Fraction must be present | `14.20` |
| Double | number | Fraction must be present | `1.0` |
| Date | string | ISO-8601 formatted date | `"2017-11-16"` |
| Timestamp | string | ISO-8601 formatted UTC timestamp including +00:00 offset | `"2017-11-16T22:31:08.000001+00:00"` |
| TimestampNTZ | string | ISO-8601 formatted UTC timestamp with no offset or zone | `"2017-11-16T22:31:08.000001"` |
| Binary | string | Base64 encoded binary | `"dmFyaWFudAo="` |
| String | string | | `"variant"` |
| Array | array | | `[34, "abc", "2017-11-16"]` |
| Object | object | | `{"id": 34, "data": "abc"}` |
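
A minimal sketch of the scalar conversions, assuming values have already been decoded into native Python objects (illustrative only, not a spec API):

```python
import base64
import datetime
import decimal
import json

def scalar_to_json(value) -> str:
    if value is None:
        return "null"
    if isinstance(value, bool):                # check bool before int: bool subclasses int
        return "true" if value else "false"
    if isinstance(value, (int, decimal.Decimal)):
        return str(value)                      # fraction digits follow the stored scale
    if isinstance(value, float):
        return repr(value)                     # non-finite floats need separate handling
    if isinstance(value, datetime.datetime):   # check datetime before date: subclass
        return json.dumps(value.isoformat())   # ISO-8601, includes offset if tz-aware
    if isinstance(value, datetime.date):
        return json.dumps(value.isoformat())
    if isinstance(value, bytes):
        return json.dumps(base64.b64encode(value).decode("ascii"))
    return json.dumps(value)                   # strings
```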
