feat: support refreshing Iceberg tables #5707

Open · lbooker42 wants to merge 40 commits into main
Conversation

@lbooker42 (Contributor) commented Jul 2, 2024

Add two methods of refreshing Iceberg tables:

  • Manual refreshing - the user specifies which snapshot to load; the engine parses the snapshot, adds/removes Iceberg data files as needed, and notifies downstream tables of the changes.
  • Auto refreshing - at a regular, user-configurable interval, the engine queries Iceberg for the latest snapshot, then parses and loads it.

Example code:

Java: automatically and manually refreshing tables

import io.deephaven.iceberg.util.*;
import org.apache.iceberg.catalog.*;

adapter = IcebergToolsS3.createS3Rest(
        "minio-iceberg",
        "http://rest:8181",
        "s3a://warehouse/wh",
        "us-east-1",
        "admin",
        "password",
        "http://minio:9000");

//////////////////////////////////////////////////////////////////////

import io.deephaven.extensions.s3.*;

s3_instructions = S3Instructions.builder()
    .regionName("us-east-1")
    .credentials(Credentials.basic("admin", "password"))
    .endpointOverride("http://minio:9000")
    .build()

import io.deephaven.iceberg.util.IcebergUpdateMode;

// Automatic refreshing every 1 second 
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.autoRefreshing(1_000L))
    .build()

// Automatic refreshing (default 60 seconds)
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.AUTO_REFRESHING)
    .build()

// Load the table and monitor changes
sales_multi = adapter.readTable(
        "sales.sales_multi",
        iceberg_instructions)

//////////////////////////////////////////////////////////////////////

// Manual refreshing
iceberg_instructions = IcebergInstructions.builder()
    .dataInstructions(s3_instructions)
    .updateMode(IcebergUpdateMode.MANUAL_REFRESHING)
    .build()

// Load a table with a specific snapshot
sales_multi = adapter.readTable(
        "sales.sales_multi",
        5120804857276751995,
        iceberg_instructions)

// Update the table to a specific snapshot
sales_multi.update(848129305390678414)

// Update to the latest snapshot
sales_multi.update()

Python: automatically and manually refreshing tables

from deephaven.experimental import s3, iceberg

local_adapter = iceberg.adapter_s3_rest(
        name="minio-iceberg",
        catalog_uri="http://rest:8181",
        warehouse_location="s3a://warehouse/wh",
        region_name="us-east-1",
        access_key_id="admin",
        secret_access_key="password",
        end_point_override="http://minio:9000")

#################################################

s3_instructions = s3.S3Instructions(
        region_name="us-east-1",
        access_key_id="admin",
        secret_access_key="password",
        endpoint_override="http://minio:9000"
        )

# Auto-refresh every 1000 ms
iceberg_instructions = iceberg.IcebergInstructions(
        data_instructions=s3_instructions,
        update_mode=iceberg.IcebergUpdateMode.auto_refreshing(1000))

sales_multi = local_adapter.read_table(table_identifier="sales.sales_multi", instructions=iceberg_instructions)

#################################################

# Manual refreshing
iceberg_instructions = iceberg.IcebergInstructions(
        data_instructions=s3_instructions,
        update_mode=iceberg.IcebergUpdateMode.MANUAL_REFRESHING)

sales_multi = local_adapter.read_table(
    table_identifier="sales.sales_multi",
    snapshot_id=5120804857276751995,
    instructions=iceberg_instructions)

# Update the table to specific snapshots
sales_multi.update(848129305390678414)
sales_multi.update(3019545135163225470)

# Update to the latest snapshot
sales_multi.update()

@lbooker42 lbooker42 added this to the 0.36.0 milestone Jul 2, 2024
@lbooker42 lbooker42 self-assigned this Jul 2, 2024
@lbooker42 lbooker42 requested a review from rcaudy July 2, 2024 15:47
/**
* Notify the listener of a {@link TableLocationKey} encountered while initiating or maintaining the location
* subscription. This should occur at most once per location, but the order of delivery is <i>not</i>
* guaranteed.
*
* @param tableLocationKey The new table location key
*/
void handleTableLocationKey(@NotNull ImmutableTableLocationKey tableLocationKey);
void handleTableLocationKeyAdded(@NotNull ImmutableTableLocationKey tableLocationKey);
Member

Good change, may be breaking for DHE, please consult Andy pre-merge.

void beginTransaction();

void endTransaction();

/**
* Notify the listener of a {@link TableLocationKey} encountered while initiating or maintaining the location
* subscription. This should occur at most once per location, but the order of delivery is <i>not</i>
@rcaudy (Member) commented Jul 3, 2024

Consider whether we can have add + remove + add. What about remove + add in the same pull?
Should document that this may change the "at most once per location" guarantee, and define semantics.
I think it should be something like:
We allow re-add of a removed TLK. Downstream consumers should process these in an order that respects delivery and transactionality.

Within one transaction, expect at most one of "remove" or "add" for a given TLK.
Within one transaction, we can allow remove followed by add, but not add followed by remove. This dictates that we deliver pending removes before pending adds in processPending.
That is, one transaction allows:

  1. Replace a TLK (remove followed by add)
  2. Remove a TLK (remove)
  3. Add a TLK (add)
    Double add, double remove, or add followed by remove is right out.

Processing an addition to a transaction:

  1. Remove: If there's an existing accumulated remove, error. Else, if there's an existing accumulated add, error. Else, accumulate the remove.
  2. Add: If there's an existing accumulated add, error. Else, accumulate the add.

Across multiple transactions delivered as a batch, ensure that the right end-state is achieved.

  1. Add + remove collapses pairwise to no-op
  2. Remove + add (assuming prior add) should be processed in order. We might very well choose to not allow re-add at this time, I don't expect Iceberg to do this. If we do allow it, we need to be conscious that the removed location's region(s) need(s) to be used for previous data, while the added one needs to be used for current data.
  3. Multiple adds or removes without their opposite intervening is an error.

A null token should be handled exactly the same as a single-element transaction.

Processing a transaction:

  1. Process removes first. If there's an add pending, delete the pending add and swallow the remove (the pair collapses to a no-op). Else, if there's a remove pending, error. Else, store the remove as pending.
  2. Process adds. If there's an add pending, error. Else, store the add as pending.
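A minimal sketch of the batching rules above, with hypothetical class and field names (this is not the actual Deephaven implementation):

import java.util.HashSet;
import java.util.Set;

// Accumulates pending adds/removes across transactions delivered as a batch.
final class PendingLocationState<TLK> {
    private final Set<TLK> pendingAdds = new HashSet<>();
    private final Set<TLK> pendingRemoves = new HashSet<>();

    // Merge one completed transaction into the pending state.
    void processTransaction(final Set<TLK> txnRemoves, final Set<TLK> txnAdds) {
        // Removes first: a pending add followed by this remove collapses pairwise to a no-op.
        for (final TLK key : txnRemoves) {
            if (pendingAdds.remove(key)) {
                continue;
            }
            if (!pendingRemoves.add(key)) {
                throw new IllegalStateException("Duplicate remove for " + key);
            }
        }
        // Then adds: a second add without an intervening remove is an error.
        for (final TLK key : txnAdds) {
            if (!pendingAdds.add(key)) {
                throw new IllegalStateException("Duplicate add for " + key);
            }
        }
    }
}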

Note: removal support means that RegionedColumnSources may no longer be immutable! We need to be sure that we are aware of whether a particular TLP might remove data, and ensure that in those cases the RCS is not marked immutable. REVISED: ONLY REPLACE IS AN ISSUE FOR IMMUTABILITY, AS LONG AS WE DON'T REUSE SLOTS.

We discussed that TLPs should probably specify whether they are guaranteeing that they will never remove TLKs, and whether their TLs will never remove or modify rows. I think if and when we encounter data sources that require modify support, we should probably just use SourcePartitionedTable instead of PartitionAwareSourceTable.

Contributor (PR author)

I'm not sure if I need to handle the RCS immutability question in this PR since Iceberg will not modify rows.

Member

Removing a region makes the values in the corresponding row key range disappear. That's OK for immutability.
If you allow a new region to use the same slot, or allow the old region to reincarnate in the same slot potentially with different data, you are violating immutability.

Not reusing slots means that a long-lived iceberg table may eventually exhaust its row key space.

Member

Replace (remove + add of a TLK) requires some kind of versioning of the TL, in a way that the TLK is aware of in order to ensure that we provide the table with the right TL for the version. AbstractTableLocationProvider's location caching layer is not currently sufficient for atomically replacing TLs.

@pete-petey pete-petey modified the milestones: 0.36.0, 0.37.0 Aug 26, 2024
@rcaudy (Member) commented Sep 4, 2024

RegionedColumnSourceManager should increment the reference count of any TableLocation it adds.
It should decrement the reference count of any location it removes, at the end of the cycle in which it processed the removal (via UpdateCommitter).
It should also decrement the reference count of any not-yet-removed location in destroy() (which it doesn't currently override, but should); be sure to call super.destroy().
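A hedged sketch of that lifecycle, with hypothetical type and method names (not the actual RegionedColumnSourceManager API):

import java.util.ArrayList;
import java.util.List;

interface RefCounted {
    void incrementReferenceCount();
    void decrementReferenceCount();
}

final class LocationLifecycle<LOC extends RefCounted> {
    private final List<LOC> held = new ArrayList<>();
    private final List<Runnable> endOfCycle = new ArrayList<>();

    void add(final LOC location) {
        location.incrementReferenceCount(); // hold the location while it backs live regions
        held.add(location);
    }

    void remove(final LOC location) {
        held.remove(location);
        // Defer the decrement to the end of the update cycle (cf. UpdateCommitter),
        // so consumers can still read previous values during this cycle.
        endOfCycle.add(location::decrementReferenceCount);
    }

    void endCycle() {
        endOfCycle.forEach(Runnable::run);
        endOfCycle.clear();
    }

    void destroy() {
        // Release anything never removed; a real override must also call super.destroy().
        held.forEach(RefCounted::decrementReferenceCount);
        held.clear();
    }
}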

@rcaudy (Member) left a comment

I think we need to get slightly more complicated; I was wrong, the TableLocation is not sufficient for reference counting. We instead need to register "ownership interest" by TableLocationKey.

We should introduce ReferenceCountedImmutableTableLocationKey, and use that as the type delivered to TableLocationProvider.Listeners.
AbstractTableLocationProvider should bifurcate its state internally into:

  1. A "live" set of TLKs (RCITLKs). The live set is the set of keys shown to static consumers and new listeners.
  2. An "available" map of TLK -> Object (which may be the TL, or the TLK). The available map allows TLs to be accessed; its keys are a superset of the live set.

Increments:

  1. Ref count on RCITLK to be held by ATLP as long as the TLK is in the “live” set;
  2. Ref count bumped before delivery to any Listener, once per listener.

Decrements:

  1. Listeners responsible to decrement if OBE, for example add followed by remove in a subscription buffer.
  2. SourceTable responsible to decrement if filtered out.
  3. RCSM responsible to decrement upon processing remove at end of cycle, or in its own destroy().

Notes:

  1. ATLP must unwrap TLK before giving to makeTableLocation
  2. RCITLK.onReferenceCountAtZero removes the TLK (and any TL that exists) from the available map. If the TL existed, sends a null update, and clears column locations.
  3. It's not bad that the RCITLK hard refs the ATLP; we already ref it from the SourceTable.
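
A minimal sketch of this bifurcation, assuming hypothetical names (not the actual AbstractTableLocationProvider internals):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class LocationState<TLK> {
    // "Live" set: keys shown to static consumers and newly-subscribed listeners.
    private final Set<TLK> live = ConcurrentHashMap.newKeySet();
    // "Available" map: TLK -> Object (the TL once made, else the TLK); superset of the live set.
    private final Map<TLK, Object> available = new ConcurrentHashMap<>();

    void handleAdd(final TLK key) {
        available.putIfAbsent(key, key);
        live.add(key);
    }

    void handleRemove(final TLK key) {
        // Removal only drops the key from the live set; the available entry lingers
        // until every holder releases it and the reference count reaches zero.
        live.remove(key);
    }

    void onReferenceCountAtZero(final TLK key) {
        available.remove(key);
    }
}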

Enterprise note:
RemoteTableDataService is a non-issue, since it’s only used with Deephaven format. Meaning, we don’t need to extend this across the wire. Andy may have to deal with that, or we may have to evolve the API, in some future use case.

Missing feature:
TLP needs to advertise its update model.
This might be:
Single-partition, add-only -> append-only table -> partition removal bad
Multi-partition, add-only -> add-only table -> partition removal bad
Multi-partition, partition removes possible -> no shifts or mods (still immutable) -> partition removal OK
Not exposing any other models at this time (e.g. partitions that can have mods, shifts, removes; if we want that, use partitioned tables).
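
A hypothetical sketch of how such an update model could be advertised; the enum and constant names are illustrative only, not what the API actually exposes:

// Illustrative only: one way a TableLocationProvider could advertise its update model.
enum LocationUpdateModel {
    SINGLE_PARTITION_ADD_ONLY,   // append-only result table; partition removal is an error
    MULTI_PARTITION_ADD_ONLY,    // add-only result table; partition removal is an error
    MULTI_PARTITION_REMOVABLE    // partitions may be removed; no shifts or modifies (still immutable)
}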
