
Hydrate MDX #220

Merged: 18 commits, Aug 1, 2023

Conversation

NickHeiner (Contributor) commented Jul 28, 2023


With this approach, we compile MDX on the client.

Pros:

  • It's easy to get the components (e.g. Card) in scope – there's no need to pass them up and down to the server.
  • It's easy for the client to otherwise customize the Markdown rendering as desired.
  • It maintains support for the text-streaming model, letting us stay closer to the Vercel useChat paved path.
  • It enables other AI.JSX components to render UI simply by emitting MDX. (Of course, we need to ensure those emitted components are in scope at compile time.)
  • I suspect the MDX compiler/runtime may be published in a way that causes trouble for CJS importers. By keeping the MDX usage in user-land, or at least on the client, we minimize the impact if there is a problem here.

Potential objections

  • Performance of doing O(count of stream chunks) compiles – I don't think this will be noticeable, particularly in the context of LLM response times.
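If per-chunk compiles ever did show up in profiles, one mitigation would be to skip recompiling when the streamed text hasn't changed between renders. A minimal sketch (a hypothetical helper, not code from this PR; `compile` stands in for whatever client-side MDX compile call is used):

```javascript
// Memoize the most recent compile so repeated renders of the same partial
// stream text don't trigger redundant MDX compiles. Only the last result is
// cached, since each stream chunk supersedes the previous partial text.
function memoizeLastCompile(compile) {
  let lastSource = null;
  let lastResult = null;
  return (source) => {
    if (source !== lastSource) {
      lastSource = source;
      lastResult = compile(source);
    }
    return lastResult;
  };
}

// Usage: count how many real compiles happen across simulated stream updates.
let compiles = 0;
const compileOnce = memoizeLastCompile((src) => {
  compiles += 1;
  return `compiled:${src}`;
});

compileOnce('# Hello');
compileOnce('# Hello');      // same partial text: no recompile
compileOnce('# Hello wor');  // new chunk appended: recompiles
// compiles is now 2
```

A single-entry cache is enough here because a streaming response only ever grows; a full memo table would just retain stale prefixes.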

Future work

  • Devising a scheme for the user to interact with the components – e.g. if the user clicks a button or fills out a form, how do we communicate that to the AI?
  • The AI does a decent but not amazing job of adhering to the spec – for instance, all my few-shot examples show Badge taking a color prop (e.g. <Badge color="yellow">In progress</Badge>), but the model doesn't always include it. There may be more prompt-engineering work we can do here.
  • The demo itself in the nextjs project isn't super compelling. I would rather focus that demo energy on HS.
  • Sometimes the model still emits ```mdx blocks wrapping large parts of its response.
  • Sometimes the model uses <details> and <summary> in a way that causes the compiler to produce <details><p><summary>, which is invalid.
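For the first of those issues, one possible workaround (a hypothetical post-processing step, not code from this PR) is to unwrap a response that the model fenced in its entirety before handing it to the compiler:

```javascript
// Hypothetical cleanup: if the model wrapped its whole reply in a ```mdx
// code fence, strip the fence so the contents compile as MDX instead of
// rendering as a literal code block. Responses without a wrapping fence
// pass through unchanged.
function stripMdxFence(text) {
  const match = text.trim().match(/^```mdx\n([\s\S]*?)\n```$/);
  return match ? match[1] : text;
}

stripMdxFence('```mdx\n<Badge color="yellow">In progress</Badge>\n```');
// → '<Badge color="yellow">In progress</Badge>'
stripMdxFence('Plain response'); // → 'Plain response' (unchanged)
```

This only handles a fence around the entire reply; fences around partial spans would need more care, since those may be intentional code samples.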

I think this approach will work reasonably well for RSC / Architecture 4, with some modification.

Once we align on this approach, I'll add docs.

@vercel
vercel bot commented Jul 28, 2023

The latest updates on your projects:

  • ai-jsx-docs – ✅ Ready – Aug 1, 2023 2:17pm UTC
  • ai-jsx-nextjs-demo – ✅ Ready – Aug 1, 2023 2:17pm UTC
  • ai-jsx-tutorial-nextjs – ✅ Ready – Aug 1, 2023 2:17pm UTC

@NickHeiner NickHeiner changed the title Add a prop so MdxChatCompletion only emits valid output Hydrate output of MdxChatCompletion Jul 29, 2023
@NickHeiner NickHeiner changed the title Hydrate output of MdxChatCompletion [WIP] Hydrate output of MdxChatCompletion Jul 29, 2023
@NickHeiner NickHeiner changed the title [WIP] Hydrate output of MdxChatCompletion Hydrate output of MdxChatCompletion Jul 31, 2023
@NickHeiner NickHeiner changed the title Hydrate output of MdxChatCompletion Hydrate MDX Jul 31, 2023