Merge branch 'current' into 1.6_mvs_current_state
dataders authored Jul 20, 2023
2 parents 3cabbd2 + 40d8da0 commit 7aa3df9
Showing 22 changed files with 669 additions and 21 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/autoupdate.yml
@@ -2,11 +2,11 @@ name: Auto Update

on:
  # This will trigger on all pushes to all branches.
  push: {}
  # push: {}
  # Alternatively, you can only trigger if commits are pushed to certain branches, e.g.:
  # push:
  #   branches:
  #     - current
  push:
    branches:
      - current
      # - unstable
jobs:
  autoupdate:
213 changes: 213 additions & 0 deletions website/blog/2023-07-17-GPT-and-dbt-test.md
@@ -0,0 +1,213 @@
---
title: "Create dbt Documentation and Tests 10x faster with ChatGPT"
description: "You can use ChatGPT to infer the context of verbosely named fields from database table schemas."
slug: create-dbt-documentation-10x-faster-with-ChatGPT
authors: [pedro_brito_de_sa]
tags: [analytics craft, data ecosystem]
hide_table_of_contents: true
date: 2023-07-18
is_featured: true
---

Whether you are building your pipelines in dbt for the first time or just adding a new model once in a while, **good documentation and testing should always be a priority** for you and your team. Why do we avoid it like the plague, then? Because it’s a hassle to write down each individual field and its description in layman’s terms, and to figure out which tests should be performed to ensure the data is fine and dandy. How can we make this process faster and less painful?

By now, everyone knows the wonders of the GPT models for code generation and pair programming, so this shouldn’t come as a surprise. But **ChatGPT really shines** at inferring the context of verbosely named fields from database table schemas. So in this post I am going to help you 10x your documentation and testing speed by using ChatGPT to do most of the legwork for you.

<!--truncate-->

As a one-person Analytics team at [Sage](http://www.hellosage.com/), I had to create our dbt pipelines from the ground up. This meant bringing 30+ tables of internal facts and dimensions, plus external data, into a Staging Layer, followed by all the subsequent layers of augmented models and Mart tables. All told, we are talking about 3,500+ lines of YAML that I was NOT excited to get started on. Fortunately for me, this was February 2023 and ChatGPT had just come out. And boy, was I glad to have it. After a good dose of “prompt engineering” I managed to get most of my documentation and tests written out, only needing a few extra tweaks.

I am writing this article in July 2023, and with ChatGPT now powered by GPT-4 rather than GPT-3.5, it is already easier to get the same results I did. So here are my learnings, which I hope everyone can replicate.

## Use verbose tables with verbose fields

ChatGPT can only infer so much, so tables with names and fields that resemble encryption keys are unlikely to be good for this approach. In this example we are going to use this table:

```sql
create or replace TRANSIENT TABLE STAGING.BASE.STG_STAFF_MEMBER (
    ID NUMBER(38,0),
    CREATEDATETIME TIMESTAMP_NTZ(9),
    UPDATEDATETIME TIMESTAMP_NTZ(9),
    VERSION NUMBER(38,0),
    FIRSTNAME VARCHAR(16777216),
    JOBTITLE VARCHAR(16777216),
    LASTNAME VARCHAR(16777216),
    MIDDLENAME VARCHAR(16777216),
    ISCAREADMIN BOOLEAN,
    ISARCHIVED BOOLEAN,
    ADDRESSID VARCHAR(16777216),
    ENTERPRISEID VARCHAR(16777216),
    ISDELETED BOOLEAN
);
```

---

I copied this definition from Snowflake, and I would recommend that you always include both the name and type of the fields to get better results. This is especially helpful if you want to make sure ChatGPT correctly distinguishes between text, number, boolean or array fields, which may not be clear from the name alone.
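If you work in Snowflake, a quick way to pull a definition like the one above is the built-in `GET_DDL` function (a minimal sketch, assuming your role can see the table):

```sql
-- Returns the full CREATE TABLE statement, including column names and data types
select get_ddl('table', 'STAGING.BASE.STG_STAFF_MEMBER');
```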

## Start out with simple prompts

We all know how ChatGPT can digest very complex prompts, but as this is a tool for you to get the best results for your use case, I would recommend starting with simple, but specific, prompts.

Opening ChatGPT with GPT-4, my first prompt is usually along these lines:

<Lightbox src="/img/blog/2023-07-17-GPT-and-dbt-test/image1.png" title="ChatGPT request for YAML file" />
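If you prefer to type it out, a prompt along these lines works well (illustrative wording only; the exact phrasing isn’t important):

```text
Write a dbt-style schema.yml file documenting the table below. Include a short
description for every column and add the dbt tests you think are appropriate.

<paste the CREATE TABLE statement here>
```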

And the output of this simple prompt is nothing short of amazing:

```yaml
version: 2

models:
  - name: STAFF_MEMBER
    description: This table contains information about the staff members.
    columns:
      - name: ID
        description: The unique identifier for the staff member.
        tests:
          - unique
          - not_null
      - name: CREATEDATETIME
        description: The timestamp when the record was created.
        tests:
          - not_null
      - name: UPDATEDATETIME
        description: The timestamp when the record was last updated.
      - name: VERSION
        description: Version number of the record.
      - name: FIRSTNAME
        description: The first name of the staff member.
        tests:
          - not_null
      - name: JOBTITLE
        description: The job title of the staff member.
        tests:
          - not_null
      - name: LASTNAME
        description: The last name of the staff member.
        tests:
          - not_null
      - name: MIDDLENAME
        description: The middle name of the staff member.
      - name: ISCARADMIN
        description: Boolean value indicating if the staff member is a care administrator.
        tests:
          - accepted_values:
              values: ['true', 'false']
      - name: ISARCHIVED
        description: Boolean value indicating if the staff member record is archived.
        tests:
          - accepted_values:
              values: ['true', 'false']
      - name: COMMUNITYID
        description: Identifier for the community of the staff member.
      - name: ENTERPRISEID
        description: Identifier for the enterprise of the staff member.
      - name: ISDELETED
        description: Boolean value indicating if the staff member record is deleted.
        tests:
          - accepted_values:
              values: ['true', 'false']
```
Back in my day (5 months ago), ChatGPT with GPT 3.5 didn’t have much context on what a dbt-style YAML file was, so the result was a funky, non-dbt YAML format. If you are using GPT 3.5 and the results you get from the first prompt don’t compile in dbt, I would recommend an extra step I took, which is to pass on your understanding of what a dbt-style YAML file is. You can do this simply by copy-pasting a piece of a publicly available YAML file like [this one from the Jaffle Shop example](https://github.com/dbt-labs/jaffle_shop/blob/main/models/schema.yml), or a piece of YAML from your own code base.

A prompt for it would look something like:

<Lightbox src="/img/blog/2023-07-17-GPT-and-dbt-test/image2.png" title="ChatGPT request to memorize dbt-style YAML" />
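For reference, the kind of excerpt you might paste looks like this (adapted from the Jaffle Shop `schema.yml`; any valid dbt YAML from your own project works just as well):

```yaml
version: 2

models:
  - name: customers
    description: One record per customer
    columns:
      - name: customer_id
        description: Primary key
        tests:
          - unique
          - not_null
```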
## Specify details on generic tests in your prompts

dbt has four built-in generic tests: `unique`, `not_null`, `accepted_values` and `relationships`. My approach to adding details on these to the prompts is one of ‘more is better’, as in it’s faster to just delete tests that I don’t need on certain fields than to add them manually.

ChatGPT will throw `unique` and `not_null` tests at almost every column that has ‘id’ in its name, and in my experience it adds them to a few more columns than it should, which is usually quick to fix.

`accepted_values` and `relationships` are slightly trickier, but the model can be adjusted to account for these in fairly simple ways. My tips are:

- `accepted_values`: add extra detail to the schema, such as an extra keyword like ‘CATEGORICAL’ after the data type
- `relationships`: bind the relationships to fields with specific names. For better results, pass on a list of valid table names from your schema that it can refer to (see the annotated schema sketch after this list).
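For illustration, the annotated schema you paste might look like this (the `CATEGORICAL` keyword and the reference table names are hypothetical hints for the model, not valid Snowflake syntax):

```sql
create or replace TRANSIENT TABLE STAGING.BASE.STG_STAFF_MEMBER (
    ID NUMBER(38,0),
    JOBTITLE VARCHAR(16777216) CATEGORICAL,
    ENTERPRISEID VARCHAR(16777216), -- relationships: STAGING.ENTERPRISE.ID
    ISDELETED BOOLEAN
);

-- Valid tables for relationships: STAGING.COMMUNITY, STAGING.ENTERPRISE
```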
One way of doing this is with a prompt like this:

<Lightbox src="/img/blog/2023-07-17-GPT-and-dbt-test/image3.png" title="ChatGPT request to write dbt-style YAML with generic tests" />

Which results in the following output:
```yaml
version: 2

models:
  - name: STAFF_MEMBER
    description: This table contains information about the staff members.
    columns:
      - name: ID
        description: The unique identifier for the staff member.
        tests:
          - unique
          - not_null
      - name: CREATEDATETIME
        description: The timestamp when the record was created.
        tests:
          - not_null
      - name: UPDATEDATETIME
        description: The timestamp when the record was last updated.
      - name: VERSION
        description: Version number of the record.
      - name: FIRSTNAME
        description: The first name of the staff member.
        tests:
          - not_null
      - name: JOBTITLE
        description: The job title of the staff member. This is a categorical field.
        tests:
          - not_null
          - accepted_values:
              values: ['Job Title 1', 'Job Title 2', 'Job Title 3'] # replace these with actual job titles
      - name: LASTNAME
        description: The last name of the staff member.
        tests:
          - not_null
      - name: MIDDLENAME
        description: The middle name of the staff member.
      - name: ISCARADMIN
        description: Boolean value indicating if the staff member is a care administrator.
        tests:
          - accepted_values:
              values: ['true', 'false']
      - name: ISARCHIVED
        description: Boolean value indicating if the staff member record is archived.
        tests:
          - accepted_values:
              values: ['true', 'false']
      - name: COMMUNITYID
        description: Identifier for the community of the staff member.
        tests:
          - relationships:
              to: STAGING.COMMUNITY.ID # replace with actual reference table
              field: ID
      - name: ENTERPRISEID
        description: Identifier for the enterprise of the staff member.
        tests:
          - relationships:
              to: STAGING.ENTERPRISE.ID # replace with actual reference table
              field: ID
      - name: ISDELETED
        description: Boolean value indicating if the staff member record is deleted.
        tests:
          - accepted_values:
              values: ['true', 'false']
```
Even though it doesn’t magically guess the categories of your variables or your relationships, it is definitely helpful to have some placeholders in the right places.

As an add-on, giving the model a short description of the data models and the tables you are working with will help it fine-tune your results.

## Wrap-Up

Creating documentation is still a very manual job, and this approach only works for one table at a time (maybe you can be the one leveraging the OpenAI API and creating a webapp that processes multiple tables at once?). However, ChatGPT can clearly cut a lot of time out of these tasks.

I hope that these simple tips help you be more motivated and efficient in creating documentation and tests for your data models. And remember: verbosity in - verbosity out!
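If you do want to take the webapp idea further, a minimal batch sketch could look like the following, assuming the 2023-era `openai` Python SDK (pre-1.0), an `OPENAI_API_KEY` environment variable, and placeholder table DDL strings:

```python
# Rough sketch: generate dbt-style YAML for several tables with the OpenAI API.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT_TEMPLATE = (
    "Write a dbt-style schema.yml entry for the table below. "
    "Document every column and suggest appropriate dbt tests.\n\n{ddl}"
)

def document_table(ddl: str) -> str:
    """Ask the model for a dbt-style YAML block for a single table definition."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(ddl=ddl)}],
        temperature=0,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Hypothetical inventory of staging tables and their DDL strings
    tables = {
        "stg_staff_member": "create or replace TRANSIENT TABLE STAGING.BASE.STG_STAFF_MEMBER (...)",
    }
    for name, ddl in tables.items():
        print(f"-- {name}")
        print(document_table(ddl))
```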
10 changes: 10 additions & 0 deletions website/blog/authors.yml
@@ -373,6 +373,16 @@ pat_kearns:
  name: Pat Kearns
  organization: dbt Labs

pedro_brito_de_sa:
  image_url: /img/blog/authors/pedro_brito.jpeg
  job_title: Product Analyst
  links:
    - icon: fa-linkedin
      url: https://www.linkedin.com/in/pbritosa/
  name: Pedro Brito de Sa
  organization: Sage

rastislav_zdechovan:
  image_url: /img/blog/authors/rastislav-zdechovan.png
  job_title: Analytics Engineer
File renamed without changes.
5 changes: 4 additions & 1 deletion website/docs/docs/build/about-metricflow.md
@@ -33,7 +33,9 @@ There are a few key principles:

- MetricFlow, as a part of the dbt Semantic Layer, allows organizations to define company metrics logic through YAML abstractions, as described in the following sections.

- You can install MetricFlow via PyPI as an extension of your [dbt adapter](/docs/supported-data-platforms) in the CLI. To install the adapter, run `pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. For example, for a Snowflake adapter run `pip install "dbt-metricflow[snowflake]"`.
- You can install MetricFlow using PyPI as an extension of your [dbt adapter](/docs/supported-data-platforms) in the CLI. To install the adapter, run `pip install "dbt-metricflow[your_adapter_name]"` and add the adapter name at the end of the command. For example, for a Snowflake adapter run `pip install "dbt-metricflow[snowflake]"`.

- To query metrics, dimensions, and dimension values, and to validate your configurations, install the [MetricFlow CLI](/docs/build/metricflow-cli).
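For illustration, typical MetricFlow CLI invocations look something like this (metric and dimension names are placeholders):

```bash
# List the dimensions available for a metric
mf list dimensions --metrics order_total

# Query a metric grouped by a time dimension
mf query --metrics order_total --group-by metric_time

# Validate your semantic model and metric configurations
mf validate-configs
```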

### Semantic graph

@@ -60,6 +62,7 @@ Metrics, which is a key concept, are functions that combine measures, constraint

MetricFlow supports different metric types:

- [Cumulative](/docs/build/cumulative) &mdash; Aggregates a measure over a given window.
- [Derived](/docs/build/derived) &mdash; An expression of other metrics, which allows you to do calculations on top of metrics.
- [Ratio](/docs/build/ratio) &mdash; Create a ratio out of two measures, like revenue per customer.
- [Simple](/docs/build/simple) &mdash; Metrics that refer directly to one measure.
2 changes: 1 addition & 1 deletion website/docs/docs/build/build-metrics-intro.md
@@ -18,7 +18,7 @@ To fully experience the dbt Semantic Layer, including the ability to query dbt m
:::

Before you start, keep the following considerations in mind:
- Use the CLI to define metrics in YAML and query them using the [new metric specifications](https://github.com/dbt-labs/dbt-core/discussions/7456).
- Define metrics in YAML and query them using the [MetricFlow CLI](/docs/build/metricflow-cli).
- You must be on dbt Core v1.6 beta or higher to use MetricFlow. [Upgrade your dbt version](/docs/core/pip-install#change-dbt-core-versions) to get started.
* Note: Support for dbt Cloud and querying via external integrations coming soon.
- MetricFlow currently only supports Snowflake and Postgres.
12 changes: 9 additions & 3 deletions website/docs/docs/build/cumulative-metrics.md
@@ -8,6 +8,12 @@ tags: [Metrics, Semantic Layer]

Cumulative metrics aggregate a measure over a given window. If no window is specified, the window is considered infinite and accumulates values over all time.

:::info MetricFlow time spine required

You will need to create the [time spine model](/docs/build/metricflow-time-spine) before you add cumulative metrics.

:::
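As a rough sketch, a day-grained time spine model might look like this (assumes the `dbt_utils` package; adjust the date range to cover your data):

```sql
-- models/metricflow_time_spine.sql
{{ config(materialized='table') }}

with days as (
    {{ dbt_utils.date_spine('day', "to_date('01/01/2000', 'mm/dd/yyyy')", "to_date('01/01/2027', 'mm/dd/yyyy')") }}
),

final as (
    select cast(date_day as date) as date_day
    from days
)

select * from final
```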

```yaml
# Cumulative metrics aggregate a measure over a given window. The window is considered infinite if no window parameter is passed (accumulate the measure over all time)
metrics:
@@ -24,7 +30,7 @@ metrics:
### Window options
This section details examples for when you specify and don't specify window options.
This section details examples of when you specify and don't specify window options.
<Tabs>
@@ -56,7 +62,7 @@ metrics:
window: 7 days
```

From the sample yaml above, note the following:
From the sample YAML above, note the following:

* `type`: Specify cumulative to indicate the type of metric.
* `type_params`: Specify the measure you want to aggregate as a cumulative metric. You have the option of specifying a `window`, or a `grain to date`.
@@ -142,7 +148,7 @@ metrics:
```yaml
metrics:
  name: revenue_monthly_grain_to_date #For this metric, we use a monthly grain to date
  description: Monthly revenue using a grain to date of 1 month (think of this as a monthly resetting point)
  description: Monthly revenue using grain to date of 1 month (think of this as a monthly resetting point)
  type: cumulative
  type_params:
    measures:
2 changes: 2 additions & 0 deletions website/docs/docs/build/incremental-models.md
@@ -57,6 +57,7 @@ from raw_app_data.events
{% if is_incremental() %}

-- this filter will only be applied on an incremental run
-- (uses > to include records whose timestamp occurred since the last run of this model)
where event_time > (select max(event_time) from {{ this }})

{% endif %}
@@ -137,6 +138,7 @@ from raw_app_data.events
{% if is_incremental() %}

-- this filter will only be applied on an incremental run
-- (uses >= to include records arriving later on the same day as the last run of this model)
where date_day >= (select max(date_day) from {{ this }})

{% endif %}
27 changes: 24 additions & 3 deletions website/docs/docs/build/measures.md
@@ -15,6 +15,27 @@ Measures are aggregations performed on columns in your model. They can be used a
| [`agg`](#aggregation) | dbt supports the following aggregations: `sum`, `max`, `min`, `count_distinct`, and `sum_boolean`. | Required |
| [`expr`](#expr) | You can either reference an existing column in the table or use a SQL expression to create or derive a new one. | Optional |
| [`non_additive_dimension`](#non-additive-dimensions) | Non-additive dimensions can be specified for measures that cannot be aggregated over certain dimensions, such as bank account balances, to avoid producing incorrect results. | Optional |
| `agg_params` | Specific aggregation properties such as a percentile. | Optional |
| `agg_time_dimension` | The time field. Defaults to the default agg time dimension for the semantic model. | Optional |
| `non_additive_dimension` | Use these configs when you need non-additive dimensions. | Optional |
| `label` | How the metric appears in project docs and downstream integrations. | Required |


## Measure spec

An example of the complete YAML measures spec is below. The actual configuration of your measures will depend on the aggregation you're using.

```bash
measures:
  - name: The name of the measure # think transaction_total. If `expr` isn't present then this is the expected name of the column [Required]
    description: same as always [Optional]
    agg: the aggregation type # think average, sum, max, min, etc. [Required]
    expr: the field # think transaction_total or some other name you might want to alias [Optional]
    agg_params: specific aggregation properties such as a percentile [Optional]
    agg_time_dimension: The time field. Defaults to the default agg time dimension for the semantic model. [Optional]
    non_additive_dimension: Use these configs when you need non-additive dimensions. [Optional]
    label: How the metric appears in project docs and downstream integrations. [Required]
```
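To make the spec concrete, a hypothetical measure configured this way could look like the following (all names are illustrative):

```yaml
measures:
  - name: transaction_total
    description: The total value of the transaction
    agg: sum
    expr: transaction_amount_usd
    agg_time_dimension: transaction_date
    label: Transaction Total
```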

### Name

@@ -62,7 +83,7 @@ If you use the `dayofweek` function in the `expr` parameter with the legacy Snow
```yaml
semantic_models:
  - name: transactions
    description: A record for every transaction that takes place. Carts are considered multiple transactions for each SKU.
    description: A record of every transaction that takes place. Carts are considered multiple transactions for each SKU.
    model: ref('schema.transactions')
    defaults:
      agg_time_dimensions:
@@ -190,14 +211,14 @@ semantic_models:
          name: metric_time
          window_choice: min
      - name: mrr_end_of_month
        description: Aggregate by summing all users active subscription plans at end of month
        description: Aggregate by summing all users' active subscription plans at the end of month
        expr: subscription_value
        agg: sum
        non_additive_dimension:
          name: metric_time
          window_choice: max
      - name: mrr_by_user_end_of_month
        description: Group by user_id to achieve each users MRR at the end of the month
        description: Group by user_id to achieve each user's MRR at the end of the month
        expr: subscription_value
        agg: sum
        non_additive_dimension: