diff --git a/_freeze/materials/d1-9002-async/codealong-1/execute-results/html.json b/_freeze/materials/d1-9002-async/codealong-1/execute-results/html.json
new file mode 100644
index 0000000..a99eb2e
--- /dev/null
+++ b/_freeze/materials/d1-9002-async/codealong-1/execute-results/html.json
@@ -0,0 +1,14 @@
+{
+ "hash": "c12dcd124a82cea6178e9a18ef6d6d65",
+ "result": {
+ "markdown": "---\ntitle: Asynchronous Processing of LEGO Model Prediction\nformat:\n html:\n code-line-numbers: false\n execute:\n echo: true\n eval: false\n---\n\n\n## Requirements\n\nThe current version of our Shiny application contains a module for generating predictions of the number of LEGO parts in a set using the number of unique colors and number of unique part categories. The API is executed and processed using the [`{httr2}`](https://httr2.r-lib.org/) package. Here is the function wrapping the API execution:\n\n\n::: {.cell}\n\n```{.r .cell-code}\n#' @importFrom httr2 request req_body_json req_perform resp_body_json\nrun_prediction <- function(df, endpoint_url, back_transform = TRUE, round_result = TRUE) {\n # create request object\n req <- request(endpoint_url)\n\n # perform request\n resp <- req |>\n req_body_json(df) |>\n req_perform()\n\n # extract predictions from response\n pred_values <- resp_body_json(resp)$.pred |> unlist()\n\n # back-transform log10 value of predicted number of parts if requested\n if (back_transform) {\n pred_values <- 10 ^ pred_values\n }\n\n # round result up to nearest integer if requested\n if (round_result) pred_values <- ceiling(pred_values)\n\n # append predictions to supplied data frame\n dplyr::mutate(df, predicted_num_parts = pred_values)\n}\n```\n:::\n\n\nUnfortunately, the prediction API call takes a bit of time to execute due to some **extremely sophisticated processing** 😅. As a result, any interactions within the application will not be processed until the prediction call completes. Our goal is to convert the prediction processing from *synchronous* to *asynchronous* using `{crew}`.\n\n## Plan\n\n1. Establish reactive values for tracking the status of the prediction calls\n1. Create a new controller to launch new R processes when new prediction tasks are launched\n1. Modify the existing `observeEvent` to push the prediction task to the controller, ensuring the key objects and required packages are passed on to the controller.\n1. Create a poll that's invalidated every 100 milliseconds to query the status of the submitted tasks in the controller and update the prediction result reactive value when complete.\n\n## Solution \n\nFirst we create the following `reactiveVal` objects to keep track of the prediction state:\n\n\n::: {.cell}\n\n```{.r .cell-code}\npred_status <- reactiveVal(\"No prediction submitted yet.\")\npred_poll <- reactiveVal(FALSE)\n```\n:::\n\n\nNext we set up a new controller:\n\n\n::: {.cell}\n\n```{.r .cell-code}\n# establish async processing with crew\ncontroller <- crew_controller_local(workers = 4, seconds_idle = 10)\ncontroller$start()\n\n# make sure to terminate the controller on stop #NEW\nonStop(function() controller$terminate())\n```\n:::\n\n\nInside the `observeEvent` for the user clicking the prediction button, we update the logic to push the prediction task to the controller:\n\n\n::: {.cell}\n\n```{.r .cell-code}\ncontroller$push(\n command = run_prediction(df),\n data = list(\n run_prediction = run_prediction,\n df = pred_data_rv$data\n ),\n packages = c(\"httr2\", \"dplyr\")\n)\n\npred_poll(TRUE)\n```\n:::\n\n\nLastly, we create a new `observe` block that periodically checks whether the running `{crew}` tasks have completed, ensuring that this is only executed when a prediction has been launched:\n\n\n::: {.cell}\n\n```{.r .cell-code}\nobserve({\n req(pred_poll())\n\n invalidateLater(millis = 100)\n result <- controller$pop()$result\n\n if (!is.null(result)) {\n pred_data_rv$data <- result[[1]]\n print(controller$summary()) \n }\n\n if (isFALSE(controller$nonempty())) {\n pred_status(\"Prediction Complete\")\n pred_poll(controller$nonempty())\n removeNotification(id = \"pred_message\")\n }\n})\n```\n:::\n",
+ "supporting": [],
+ "filters": [
+ "rmarkdown/pagebreak.lua"
+ ],
+ "includes": {},
+ "engineDependencies": {},
+ "preserve": {},
+ "postProcess": true
+ }
+}
\ No newline at end of file
diff --git a/materials/d1-9002-async/assets/img/lego_line_pay.jpg b/materials/d1-9002-async/assets/img/lego_line_pay.jpg
new file mode 100644
index 0000000..2f9df28
Binary files /dev/null and b/materials/d1-9002-async/assets/img/lego_line_pay.jpg differ
diff --git a/materials/d1-9002-async/assets/img/shiny.png b/materials/d1-9002-async/assets/img/shiny.png
new file mode 100644
index 0000000..281845f
Binary files /dev/null and b/materials/d1-9002-async/assets/img/shiny.png differ
diff --git a/materials/d1-9002-async/codealong-1.qmd b/materials/d1-9002-async/codealong-1.qmd
new file mode 100644
index 0000000..f91c260
--- /dev/null
+++ b/materials/d1-9002-async/codealong-1.qmd
@@ -0,0 +1,107 @@
+---
+title: Asynchronous Processing of LEGO Model Prediction
+format:
+ html:
+ code-line-numbers: false
+ execute:
+ echo: true
+ eval: false
+---
+
+## Requirements
+
+The current version of our Shiny application contains a module for generating predictions of the number of LEGO parts in a set based on the number of unique colors and the number of unique part categories. The API request is performed and its response processed with the [`{httr2}`](https://httr2.r-lib.org/) package. Here is the function wrapping the API call:
+
+```{r}
+#' @importFrom httr2 request req_body_json req_perform resp_body_json
+run_prediction <- function(df, endpoint_url, back_transform = TRUE, round_result = TRUE) {
+ # create request object
+ req <- request(endpoint_url)
+
+ # perform request
+ resp <- req |>
+ req_body_json(df) |>
+ req_perform()
+
+ # extract predictions from response
+ pred_values <- resp_body_json(resp)$.pred |> unlist()
+
+ # back-transform log10 value of predicted number of parts if requested
+ if (back_transform) {
+ pred_values <- 10 ^ pred_values
+ }
+
+ # round result up to nearest integer if requested
+ if (round_result) pred_values <- ceiling(pred_values)
+
+ # append predictions to supplied data frame
+ dplyr::mutate(df, predicted_num_parts = pred_values)
+}
+```
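+
+As a quick illustration of calling this function on its own, here is a hypothetical invocation; the column names and endpoint URL below are placeholders for this sketch, not values taken from the app:
+
+```{r}
+# hypothetical input data; the real app supplies these columns reactively
+new_sets <- data.frame(
+  num_colors = c(8, 12),
+  num_part_categories = c(15, 22)
+)
+
+# the endpoint URL is a placeholder for the deployed model API
+run_prediction(new_sets, endpoint_url = "http://localhost:8000/predict")
+```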
+
+Unfortunately, the prediction API call takes a bit of time to execute due to some **extremely sophisticated processing** 😅. As a result, no other interactions within the application are processed until the prediction call completes. Our goal is to convert the prediction processing from *synchronous* to *asynchronous* using `{crew}`.
+
+## Plan
+
+1. Establish reactive values for tracking the status of the prediction calls.
+1. Create a new controller that launches fresh R processes as prediction tasks arrive.
+1. Modify the existing `observeEvent` to push the prediction task to the controller, ensuring the key objects and required packages are passed along to it.
+1. Create a poll, invalidated every 100 milliseconds, that queries the status of submitted tasks in the controller and updates the prediction result reactive value on completion.
+
+## Solution
+
+First we create the following `reactiveVal` objects to keep track of the prediction state:
+
+```{r}
+pred_status <- reactiveVal("No prediction submitted yet.")
+pred_poll <- reactiveVal(FALSE)
+```
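+
+Recall how a `reactiveVal` works: calling the object with no arguments reads the current value (creating a reactive dependency), while calling it with a single argument assigns a new value and invalidates any dependents:
+
+```{r}
+pred_status()                        # read: returns the current status string
+pred_status("Prediction submitted")  # write: update the status
+```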
+
+Next we set up a new controller:
+
+```{r}
+# establish async processing with crew
+controller <- crew_controller_local(workers = 4, seconds_idle = 10)
+controller$start()
+
+# make sure to terminate the controller when the session stops
+onStop(function() controller$terminate())
+```
+
+Inside the `observeEvent` triggered by clicking the prediction button, we update the logic to push the prediction task to the controller:
+
+```{r}
+# send the task to the controller, passing along the function,
+# its input data, and the packages the worker process needs
+controller$push(
+  command = run_prediction(df),
+  data = list(
+    run_prediction = run_prediction,
+    df = pred_data_rv$data
+  ),
+  packages = c("httr2", "dplyr")
+)
+
+# flag that a prediction is in flight so the polling observer activates
+pred_poll(TRUE)
+```
+
+Lastly, we create a new `observe` block that periodically checks whether the running `{crew}` tasks have completed, ensuring that this is only executed when a prediction has been launched:
+
+```{r}
+observe({
+  # only poll while a prediction task is in flight
+  req(pred_poll())
+
+  # re-check the controller every 100 milliseconds
+  invalidateLater(millis = 100)
+  result <- controller$pop()$result
+
+  if (!is.null(result)) {
+    pred_data_rv$data <- result[[1]]
+    print(controller$summary())
+  }
+
+  # once no tasks remain, update the status and stop polling
+  if (isFALSE(controller$nonempty())) {
+    pred_status("Prediction Complete")
+    pred_poll(FALSE)
+    removeNotification(id = "pred_message")
+  }
+})
+```
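+
+Putting it all together, here is a condensed sketch of how these pieces fit inside the module server. The `input$predict` ID and the surrounding server structure are assumptions for illustration; your module's actual IDs and notification logic may differ:
+
+```{r}
+server <- function(input, output, session) {
+  # bookkeeping for prediction state
+  pred_data_rv <- reactiveValues(data = NULL)
+  pred_status <- reactiveVal("No prediction submitted yet.")
+  pred_poll <- reactiveVal(FALSE)
+
+  # one controller per session, cleaned up on stop
+  controller <- crew_controller_local(workers = 4, seconds_idle = 10)
+  controller$start()
+  onStop(function() controller$terminate())
+
+  # push the task when the (assumed) predict button is clicked
+  observeEvent(input$predict, {
+    controller$push(
+      command = run_prediction(df),
+      data = list(run_prediction = run_prediction, df = pred_data_rv$data),
+      packages = c("httr2", "dplyr")
+    )
+    pred_poll(TRUE)
+  })
+
+  # poll the controller until all tasks finish
+  observe({
+    req(pred_poll())
+    invalidateLater(millis = 100)
+    result <- controller$pop()$result
+    if (!is.null(result)) pred_data_rv$data <- result[[1]]
+    if (isFALSE(controller$nonempty())) {
+      pred_status("Prediction Complete")
+      pred_poll(FALSE)
+    }
+  })
+}
+```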
+
diff --git a/materials/d1-9002-async/index.qmd b/materials/d1-9002-async/index.qmd
new file mode 100644
index 0000000..4a86079
--- /dev/null
+++ b/materials/d1-9002-async/index.qmd
@@ -0,0 +1,95 @@
+---
+title: "Asynchronous Processing"
+subtitle: "posit::conf(2023) Shiny in Production: Tools & Techniques"
+author: "TBD"
+footer: "[{{< var workshop_short_url >}}]({{< var workshop_full_url >}})"
+format:
+ revealjs:
+ theme: [default, ../slides.scss] # moon= teal bg | dark
+ scrollable: true
+ incremental: false
+ slide-number: c/t # c/t | c | h/v | h.v
+ slide-tone: false #true
+ code-line-numbers: true
+ history: false
+revealjs-plugins:
+ - codewindow
+---
+
+## Single (threaded) Line {background-image="assets/img/lego_line_pay.jpg" background-size="cover"}
+
+* A single R process managing the different tasks in a Shiny application
+* Executed one-by-one
+
+# Should I care?
+
+## It Depends ...
+
+If you are the __only__ user for a quick and efficient app: Likely not
+
+::: {.notes}
+TODO: Find a way to center the sentence vertically in the slide
+:::
+
+## Crowd Pleaser
+
+Multiple users accessing the app __concurrently__:
+
+* Single-threaded R process serving multiple users in typical deployments
+
+## Asynchronous Processing (circa 2018)
+
+:::: {.columns}
+
+::: {.column width="50%"}
+
+### 📦 [`{promises}`](https://rstudio.github.io/promises/index.html)
+
+Handle objects representing the (eventual) result of an async operation
+
+:::
+
+::: {.column width="50%"}
+
+### 📦 [`{future}`](https://future.futureverse.org/)
+
+Launch tasks without blocking current R session
+
+:::
+
+::::
+
+::: footer
+[Using promises with Shiny](https://rstudio.github.io/promises/articles/promises_06_shiny.html)
+:::
+
+## Introducing [`{crew}`](https://wlandau.github.io/crew/)
+
+> A distributed worker launcher for asynchronous tasks
+
+* Extends the [`{mirai}`](https://github.com/shikokuchuo/mirai) task scheduler to multiple computing backends
+* Central controller object manages tasks (scales on the fly)
+* Supports multiple [controller groups](https://wlandau.github.io/crew/articles/controller_groups.html) for specialized worker types
+* Fits nicely with [`{targets}`](https://docs.ropensci.org/targets/) and ...
+
+. . .
+
+![](assets/img/shiny.png){.absolute top=0 left=200}
+
+## Watch-Along {background-color="#17395c"}
+
+Using `{crew}` inside a Shiny application:
+
+* Vignette:
+* Application:
+
+## Setting up for Success
+
+1. Create functions for long-running tasks
+1. Create multiple [`reactiveVal`](https://shiny.posit.co/r/reference/shiny/latest/reactiveval) objects for bookkeeping
+1. Set up a `{crew}` controller
+1. Establish an event-driven push of tasks to the controller with monitoring of worker status
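+
+The controller lifecycle behind steps 3 and 4 can be sketched with a minimal stand-alone example (no Shiny required):
+
+```r
+library(crew)
+
+controller <- crew_controller_local(workers = 2)
+controller$start()
+
+# push a task, wait for it to finish, then collect the result
+controller$push(command = Sys.getpid())
+controller$wait()
+controller$pop()$result
+
+controller$terminate()
+```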
+
+# Code-Along {background-color="#17395c"}
+
+[Code-Along 1](codealong-1.html){target="_blank"}: Asynchronous calls of a web API
\ No newline at end of file
diff --git a/units/d1-9002-async.qmd b/units/d1-9002-async.qmd
new file mode 100644
index 0000000..19e03e3
--- /dev/null
+++ b/units/d1-9002-async.qmd
@@ -0,0 +1,32 @@
+---
+title: "Async Processing"
+subtitle: "TBD"
+author: "Eric Nantz & Michael Thomas"
+date: "2023-09-18"
+listing:
+ - id: code-along
+ contents:
+ - ../materials/d1-9002-async/codealong-*.qmd
+ type: table
+ fields: [subtitle, title]
+ field-display-names:
+ subtitle: "Code-Along"
+ sort: [filename]
+ sort-ui: false
+ filter-ui: false
+ image-placeholder: assets/img/placeholder.png
+tbl-colwidths: [5,20,75]
+---
+
+## Slides
+
+::: callout-warning
+These slides are under construction and will be finalized prior to the workshop date.
+:::
+
+[View slides in full screen](../materials/d1-9002-async/index.html)
+
+```{=html}
+
+```
+