diff --git a/.demo/stepOne.png b/.demo/stepOne.png
new file mode 100644
index 0000000..4137f8b
Binary files /dev/null and b/.demo/stepOne.png differ
diff --git a/.demo/stepThree.png b/.demo/stepThree.png
new file mode 100644
index 0000000..30229da
Binary files /dev/null and b/.demo/stepThree.png differ
diff --git a/.demo/stepTwo.png b/.demo/stepTwo.png
new file mode 100644
index 0000000..7b8b874
Binary files /dev/null and b/.demo/stepTwo.png differ
diff --git a/README.md b/README.md
index b448900..5287a66 100644
--- a/README.md
+++ b/README.md
@@ -19,22 +19,18 @@ Since Polars leverages Rust speedups, you need to have Rust installed in your en
## Usage
-```
-# Instantiate a connection to BigQuery
-from klondike import BigQueryConnector
-
-bq = BigQueryConnector(
- app_creds="/path/to/your/service_account.json"
-)
-
-# Read data from BigQuery
-sql = "SELECT * FROM nba_dbt.staging__nyk_players"
-df = bq.read_dataframe_from_bigquery(sql=sql)
-
-# Write data to BigQuery
-bq.write_dataframe_to_bigquery(
- df=df,
- table_name="nba_dbt.my_new_table",
- if_eixsts="truncate"
-)
-```
\ No newline at end of file
+In this demo we'll connect to BigQuery, read data, transform it, and write it back to the data warehouse.
+
+First, connect to the BigQuery warehouse by supplying the `BigQueryConnector()` object with the relative path to your service account credentials.
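+
+For reference, a minimal sketch of this step (the credentials path below is a placeholder):
+
+```python
+from klondike import BigQueryConnector
+
+# Point the connector at your service account key file
+bq = BigQueryConnector(
+    app_creds="/path/to/your/service_account.json"
+)
+```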
+
+Next, pass a SQL query to the `read_dataframe_from_bigquery()` function to render a `DataFrame` object:
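+
+A sketch of the read step, reusing the example query from an earlier version of this README:
+
+```python
+# Read data from BigQuery into a Polars DataFrame
+sql = "SELECT * FROM nba_dbt.staging__nyk_players"
+df = bq.read_dataframe_from_bigquery(sql=sql)
+```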
+
+![Read a DataFrame from BigQuery](.demo/stepOne.png)
+
+Now that your data is pulled into a local instance, you can clean and transform it using standard Polars functionality - [see the docs](https://docs.pola.rs/py-polars/html/reference/dataframe/index.html) for more information.
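+
+For example, a simple filter-and-select in Polars (the column names here are hypothetical):
+
+```python
+import polars as pl
+
+# Keep high-scoring players and a subset of columns
+df = (
+    df.filter(pl.col("points") > 10)
+      .select(["player", "points"])
+)
+```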
+
+![Transform data with Polars](.demo/stepTwo.png)
+
+Finally, push your transformed data back to the BigQuery warehouse using the `write_dataframe_to_bigquery()` function:
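+
+A sketch of the write step; the destination table is a placeholder, and the `if_exists` parameter name assumes the earlier example's `if_eixsts` was a typo:
+
+```python
+# Write the transformed DataFrame back to BigQuery
+bq.write_dataframe_to_bigquery(
+    df=df,
+    table_name="nba_dbt.my_new_table",
+    if_exists="truncate"  # assumed parameter name
+)
+```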
+
+![Write a DataFrame to BigQuery](.demo/stepThree.png)
\ No newline at end of file