
New Render API #224

Closed · wants to merge 11 commits into from

Conversation

@NickHeiner (Contributor) commented Jul 31, 2023

Our current Render API has a few challenges.

  1. We need to document constraints that may not be immediately obvious to people.
  2. Some people are less familiar with generator semantics.
  3. Is it weird to have a value that's both a promise and an async generator?
  4. AI.createRenderContext().render is awkward because it's not obvious what a render context is.
  5. Other output methods (e.g. toTextStream, toReactStream, etc.) are scattered.
  6. If you want deltas, you need to write the frame.slice(lastValue.length) pattern by hand (see the sketch after this list).
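
For context, item 6 refers to roughly this pattern with the current API (a sketch only; `<App />` is a placeholder and the exact call shape may differ):

```tsx
import * as AI from 'ai-jsx';

// Manual delta pattern with the current render API: each frame is the full
// output so far, so the consumer has to slice off what it has already seen.
let lastValue = '';
for await (const frame of AI.createRenderContext().render(<App />)) {
  const delta = frame.slice(lastValue.length);
  process.stdout.write(delta);
  lastValue = frame;
}
```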

To address this, I started sketching a new API that wraps the old one.

Downsides of the new approach:

  1. With AI.createRenderContext().render, if you want to render within a component, it's more natural to use componentContext.render rather than re-importing the top-level API. If I simplify the top-level API, that natural barrier erodes.

I think this may be valuable as the top-level render API. For the "within components" rendering – maybe? If we move forward with this, I suspect we'd wrap the current top-level render API, rather than trying to use this for all render internals.
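
To make the shape concrete, usage might look something like this (treeStream and deltaStream are taken from the snippets later in this PR; everything else is illustrative, not the final API):

```tsx
// Hypothetical usage of the proposed wrapper API. <App /> is a placeholder
// and `render` stands in for whatever the new entry point ends up being.
const rendered = render(<App />);

// Option 1: full frames — each frame is the complete output so far.
for await (const frame of rendered.treeStream()) {
  console.log(frame);
}

// Option 2: deltas only, with no manual frame.slice bookkeeping.
// (Per the snippets below, a stream consumed after the render has finished
// just emits one final chunk.)
for await (const delta of rendered.deltaStream()) {
  process.stdout.write(delta);
}
```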

vercel bot commented Jul 31, 2023

The latest updates on your projects:

Name                     Status    Updated (UTC)
ai-jsx-docs              ✅ Ready  Jul 31, 2023 3:34pm
ai-jsx-nextjs-demo       ✅ Ready  Jul 31, 2023 3:34pm
ai-jsx-tutorial-nextjs   ✅ Ready  Jul 31, 2023 3:34pm

Comment on lines 428 to 435
iteration++;
await sleep();
yield `first ${iteration} `
await sleep();
yield `second ${iteration} `
await sleep();
yield `third ${iteration}`
return AI.AppendOnlyStream;
Contributor Author (NickHeiner):
This is because I was working on this on the plane with no wifi.

@@ -446,6 +470,7 @@ export async function* OpenAIChatModel(

let delta = await advance();
while (delta !== null) {
// @ts-ignore
Contributor Author (NickHeiner):

This will be removed.

Contributor Author (NickHeiner):

This is just a hack to show the idea. If we move forward, I'll clean it up / move it somewhere more sensible.


// This all needs to be validated with genuine async.

const renderable = <App />
Contributor Author (NickHeiner):

This section and below demonstrate how this API would be used.

@@ -1,13 +1,14 @@
{
"extends": "@tsconfig/node18/tsconfig.json",
"include": ["src/**/*", ".eslintrc.cjs"],
"include": ["src/simple-chat.tsx", ".eslintrc.cjs"],
Contributor Author (NickHeiner):

This was just for local testing.

petersalas (Contributor) left a comment:

This seems like a mix of:

  1. Grouping what are effectively existing "entry-point" type APIs
  2. Tweaking of semantics for the existing render
  3. Facilitating consumption of the same render stream in multiple formats

But 1 and 2 feel like they can be done directly (e.g. refactoring and/or organizing imports, changing the existing implementation), and I'm not sure what the motivation for 3 is.

At a high level, I agree there are pain points here, but I'd rather solve them more directly than by adding another layer on top of the existing stuff, if that makes sense.

}
done = true;
resolveFinalResult!(lastFrame);
})();
Contributor (petersalas):

All things equal I think it's preferable to let consumers pull from the stream rather than proactively flushing it to an EventEmitter
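
For illustration, the pull-based shape being suggested might look something like this (a sketch under assumed names, not the PR's implementation):

```tsx
// Pull-based alternative: instead of eagerly flushing frames into an
// EventEmitter, expose an async generator and let the consumer drive the
// pace. The underlying render only advances when someone pulls.
async function* treeStream(
  frames: AsyncIterable<string>
): AsyncGenerator<string, string> {
  let lastFrame = '';
  for await (const frame of frames) {
    lastFrame = frame;
    yield frame;
  }
  // The final result falls out naturally once the source is exhausted.
  return lastFrame;
}
```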

const finalResult = new Promise((resolve) => {
resolveFinalResult = resolve;
});
const eventEmitter = new EventEmitter();
Contributor (petersalas):

With EventEmitter if there are no listeners I assume events are dropped, is that correct?
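
(For reference: Node's EventEmitter does not buffer, so an emit with no listeners attached is simply dropped, and emit() returns false in that case. A quick illustration:)

```ts
import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();

// No listener yet — this event is lost and emit() returns false.
emitter.emit('frame', 'first');

emitter.on('frame', (frame) => console.log(frame));

// Listener attached — this one is delivered.
emitter.emit('frame', 'second');
```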

Comment on lines +138 to +142
await Promise.all([
streamToValues(rendered.treeStream()),
streamToValues(rendered.treeStream()),
streamToValues(rendered.deltaStream()),
])
Contributor (petersalas):

What's the scenario where one wants to consume the same stream in multiple formats/multiple times? That feels like it should inform what the right semantics are, i.e. should they operate independently or in sync?
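
(For reference, the streamToValues helper used in the snippet above isn't shown in this excerpt; presumably it just drains an async iterable into an array, roughly like this:)

```ts
// Assumed shape of streamToValues: collect every chunk an async iterable
// produces into an array.
async function streamToValues<T>(stream: AsyncIterable<T>): Promise<T[]> {
  const values: T[] = [];
  for await (const value of stream) {
    values.push(value);
  }
  return values;
}
```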


console.log();
console.log('=== Second Render ===');
// Pass a single map function, and it's applied to intermediate and the final results.
Contributor (petersalas):

This could (should?) just be a change to the existing render?
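
For concreteness, the map semantics under discussion would look roughly like this (the option name and call shape are assumptions for illustration, not the PR's exact API):

```tsx
// One map function applied to every intermediate frame and to the final
// result alike. <App /> and `render` are placeholders.
const rendered = render(<App />, { map: (frame: string) => frame.toUpperCase() });

for await (const frame of rendered.treeStream()) {
  // Intermediate frames arrive already mapped...
  console.log(frame);
}
// ...and whatever final-result accessor exists would be mapped the same way.
```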

streamToValues(rendered.deltaStream()),
])
);
// If you consume a stream after the render is complete, you just get one chunk with the final result.
Contributor (petersalas):

This could also just be a change to the existing render?

@NickHeiner (Contributor, Author) replied:

> This seems like a mix of:
>
>   1. Grouping what are effectively existing "entry-point" type APIs
>   2. Tweaking of semantics for the existing render
>   3. Facilitating consumption of the same render stream in multiple formats
>
> But 1 and 2 feel like they can be done directly (e.g. refactoring and/or organizing imports, changing the existing implementation), and I'm not sure what the motivation for 3 is.
>
> At a high level, I agree there are pain points here, but I'd rather solve them more directly than by adding another layer on top of the existing stuff, if that makes sense.

I'd be happy to address some of the issues by changing the existing render semantics. I did this separately in part to make it easier to demo, but also because I wasn't sure if changing the existing semantics would break some other cases. (Like, is there a reason that map only applies to intermediate results?)

For (3), I don't expect it to be a common case, but the current limitation felt like a footgun that would need to be called out in the docs, so I would just as soon avoid it.

NickHeiner closed this Sep 13, 2023