New Render API #224

Closed
wants to merge 11 commits
Changes from 3 commits
17 changes: 16 additions & 1 deletion packages/ai-jsx/src/core/render.ts
@@ -63,7 +63,7 @@ export interface Context<T> {
[contextKey]: { defaultValue: T; userContextSymbol: symbol };
}

interface RenderOpts<TIntermediate = string, TFinal = string> {
export interface RenderOpts<TIntermediate = string, TFinal = string> {
/**
* Instructs rendering to stop rendering on certain elements. When specified,
* rendering will return an array of strings and `Element`s rather than a
@@ -384,6 +384,21 @@ export function createRenderContext(opts?: { logger?: LogImplementation }) {
});
}

// class RenderObserver {
// constructor(renderResult: )
// }

async function render(renderable: Renderable, opts?: {
logger?: LogImplementation,
} & Pick<RenderOpts, 'appendOnly'>) {
const renderResult = createRenderContext({logger: opts?.logger}).render(renderable, {appendOnly: opts?.appendOnly});

let lastFrame = '';
for await (const frame of renderResult) {
lastFrame = frame;
}
// Resolve with the final frame once rendering completes.
return lastFrame;
}

function createRenderContextInternal(renderStream: StreamRenderer, userContext: Record<symbol, any>): RenderContext {
const context: RenderContext = {
render: <TFinal extends string | PartiallyRendered[], TIntermediate>(
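For illustration, a minimal sketch of how this draft helper might be used if it were exported from the package (an assumption; the snippet is not part of this diff and presumes a tsconfig with jsxImportSource set to ai-jsx):

import { ChatCompletion, UserMessage } from 'ai-jsx/core/completion';

// Hypothetical consumer of the proposed top-level render helper.
function App() {
  return (
    <ChatCompletion>
      <UserMessage>Say hello.</UserMessage>
    </ChatCompletion>
  );
}

// Render to completion and get the final frame without constructing a
// RenderContext by hand; appendOnly is forwarded to the underlying render.
const finalFrame = await render(<App />, { appendOnly: true });
console.log(finalFrame);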
25 changes: 25 additions & 0 deletions packages/ai-jsx/src/lib/openai.tsx
@@ -243,6 +243,8 @@ async function checkOpenAIResponse<M extends OpenAIMethod>(response: Response, l
}
}

let iteration = 0;

/**
* Represents an OpenAI text completion model (e.g., `text-davinci-003`).
*/
@@ -410,6 +412,28 @@ export async function* OpenAIChatModel(
stream: true,
};

function sleep() {
// return new Promise(resolve => setTimeout(resolve, 500));

// return new Promise<void>((resolve) => {
// console.log('timeout start');
// setTimeout(() => {
// console.log('timeout done');
// resolve();
// }, 1000);
// })

return Promise.resolve();
}
iteration++;
await sleep();
yield `first ${iteration} `
await sleep();
yield `second ${iteration} `
await sleep();
yield `third ${iteration}`
return AI.AppendOnlyStream;
Contributor Author: This is because I was working on this on the plane with no wifi.


logger.debug({ chatCompletionRequest }, 'Calling createChatCompletion');
const chatResponse = await openai.createChatCompletion(chatCompletionRequest);

@@ -446,6 +470,7 @@ export async function* OpenAIChatModel(

let delta = await advance();
while (delta !== null) {
// @ts-ignore
Contributor Author: This will be removed.

if (delta.role === 'assistant') {
// Memoize the stream to ensure it renders only once.
const assistantStream = memo(
53 changes: 50 additions & 3 deletions packages/ai-jsx/src/react/jit-ui/mdx.tsx
@@ -2,14 +2,35 @@ import * as AI from '../core.js';
import { ChatCompletion, SystemMessage } from '../../core/completion.js';
import React from 'react';
import { collectComponents } from '../completion.js';
import { compile } from '@mdx-js/mdx';

/**
* Use GPT-4 with this.
* A completion component that emits [MDX](https://mdxjs.com/).
*
* By default, the result streamed out of this component will sometimes be unparsable, as the model emits a partial value.
* (For instance, if the model is emitting the string `foo <Bar />`, and
* it streams out `foo <Ba`, that's not parsable.)
*
* To ensure that the result is always parsable, pass the prop `alwaysParsable`. This will buffer up intermediate streaming results until the result accumulated so far is parsable.
*
* You'll get better results with this if you use GPT-4.
*
* Use `usageExamples` to teach the model how to use your components.
*
* @see https://docs.ai-jsx.com/guides/mdx
* @see https://github.com/fixie-ai/ai-jsx/blob/main/packages/examples/src/mdx.tsx
*/
export function MdxChatCompletion({ children, usageExamples }: { children: AI.Node; usageExamples: React.ReactNode }) {
export async function* MdxChatCompletion(
{
children,
usageExamples,
alwaysParsable,
}: { children: AI.Node; usageExamples: React.ReactNode; alwaysParsable?: boolean },
{ render, logger }: AI.ComponentContext
) {
const components = collectComponents(usageExamples);
/* prettier-ignore */
return <ChatCompletion>
const completion = <ChatCompletion>
<SystemMessage>
You are an assistant who can use React components to work with the user. By default, you use markdown. However, if it's useful, you can also mix in the following React components: {Object.keys(components).join(', ')}.
All your responses
@@ -112,4 +133,30 @@ export function MdxChatCompletion({ children, usageExamples }: { children: AI.No
</SystemMessage>
{children}
</ChatCompletion>;

if (!alwaysParsable) {
return completion;
}

const renderedCompletion = render(completion, { appendOnly: true });
yield AI.AppendOnlyStream;

let lastParsableFrame = '';

for await (const frame of renderedCompletion) {
const delta = frame.slice(lastParsableFrame.length);
try {
await compile(frame);
logger.trace({ frame }, 'Yielding parsable frame');
lastParsableFrame = frame;
yield delta;
} catch {
// Make sure we only yield parsable frames.
logger.trace({ frame }, 'Not yielding unparsable frame');
}
}
const finalResult = await renderedCompletion;
// Assume the final result is parsable and yield whatever remains after the last parsable frame.
yield finalResult.slice(lastParsableFrame.length);
return AI.AppendOnlyStream;
}
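A usage sketch for the new alwaysParsable prop, following the JSDoc above. The import paths, the ButtonCard component, and the prompt are assumptions for illustration (and the React/AI.JSX pragma setup is omitted); only MdxChatCompletion, usageExamples, and alwaysParsable come from this change:

import { MdxChatCompletion } from 'ai-jsx/react/jit-ui/mdx';
import { UserMessage } from 'ai-jsx/core/completion';

// A React component the model is allowed to emit in its MDX output (hypothetical).
function ButtonCard({ label }: { label: string }) {
  return <button>{label}</button>;
}

const completion = (
  <MdxChatCompletion alwaysParsable usageExamples={<ButtonCard label="Example" />}>
    <UserMessage>Suggest a next step and present it as a button.</UserMessage>
  </MdxChatCompletion>
);
// With alwaysParsable set, only frames that compile as MDX are streamed out;
// without it, intermediate frames may end mid-tag (e.g. `foo <Ba`).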
185 changes: 183 additions & 2 deletions packages/examples/src/simple-chat.tsx
Contributor Author: This is just a hack to show the idea. If we move forward, I'll clean it up / move it somewhere more sensible.

@@ -1,5 +1,6 @@
import { ChatCompletion, SystemMessage, UserMessage } from 'ai-jsx/core/completion';
import { showInspector } from 'ai-jsx/core/inspector';
// import { showInspector } from 'ai-jsx/core/inspector';
import * as AI from 'ai-jsx';

function App() {
return (
@@ -10,4 +11,184 @@ function App() {
);
}

showInspector(<App />);
// showInspector(<App />);

async function* renderableMock() {
yield 'first '
yield 'first second '
yield 'first second third'
}

// class RenderObserver {
// constructor()
// }

/* eslint-disable no-undef */

// function sleep() {
// return new Promise(resolve => setTimeout(resolve, 500));
// }
// console.log('pre sleep')
// await sleep();
// console.log('post sleep')

function render(renderable: any, opts?: Pick<AI.RenderOpts, 'map'>) {
const mapFn = opts?.map ? opts.map : (x: string) => x;

function makeRenderAdapter(renderResult: AI.RenderResult<string, string>) {
let lastFrame = mapFn('');
let done = false;
let resolveNextFrameAvailable: (value?: unknown) => void;
let nextFrameAvailable = new Promise(resolve => {
resolveNextFrameAvailable = resolve;
});

let resolveFinalResult;
let finalResult = new Promise(resolve => {
resolveFinalResult = resolve
});

(async () => {
let loopIteration = 0;
for await (const frame of renderResult) {
// console.log({frame, loopIteration: loopIteration++});
lastFrame = mapFn(frame);
resolveNextFrameAvailable!();

nextFrameAvailable = new Promise(resolve => {
resolveNextFrameAvailable = resolve;
})
}
done = true;
// Wake any consumer still waiting on the next frame so it can observe completion.
resolveNextFrameAvailable!();
resolveFinalResult!(lastFrame);
})();

return {
getLastFrame: () => lastFrame,
getDone: () => done,
getNextFrameAvailable: () => nextFrameAvailable,
finalResult
}
}

const renderContext = AI.createRenderContext(/* take logger */);
const memoized = renderContext.memo(renderable);
const treeStreamRender = AI.createRenderContext().render(memoized);
const appendStreamRender = AI.createRenderContext().render(memoized, {appendOnly: true});

const treeStreamRenderAdapter = makeRenderAdapter(treeStreamRender);
const appendStreamRenderAdapter = makeRenderAdapter(appendStreamRender);

function makeFrameStream(renderAdapter: ReturnType<typeof makeRenderAdapter>) {
// let lastEmitted: string | undefined;
return new ReadableStream({
async pull(controller) {
// console.log('pull', lastEmitted, lastFrame)

// When I do this, the process exits with code 13
// if (lastEmitted !== lastFrame) {
// controller.enqueue(lastFrame);
// lastEmitted = lastFrame;
// }

// If I do this, we see a bunch of intermediate frames.
controller.enqueue(renderAdapter.getLastFrame());

if (renderAdapter.getDone()) {
controller.close();
return;
}
// Wait for the renderer to produce the next frame before the next pull.
await renderAdapter.getNextFrameAvailable();
}
})
}

/**
* This only works when the stream being piped into it is appendOnly.
* Can I memoize a renderable and then render it both appendOnly
* and tree-streamed?
*/
function makeDeltaTransformer() {
let lastEmittedValue = '';
return new TransformStream({
transform(latestFrame, controller) {
const delta = latestFrame.slice(lastEmittedValue.length)
lastEmittedValue = latestFrame;
controller.enqueue(delta);
}
})
}
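// Worked example (added for clarity; not in the original diff): with the
// append-only frames from renderableMock above,
//   'first ' -> 'first second ' -> 'first second third'
// the transformer above emits only the new suffix of each frame:
//   'first ' -> 'second ' -> 'third'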

function makeDeltaStream() {
return makeFrameStream(appendStreamRenderAdapter).pipeThrough(makeDeltaTransformer());
}

return {
treeStream: () => makeFrameStream(treeStreamRenderAdapter),
appendStream: () => makeFrameStream(appendStreamRenderAdapter),
deltaStream: makeDeltaStream,
// We could pull the final result from either adapter – they're equivalent.
result: appendStreamRenderAdapter.finalResult,

writeToStdout: async () => {
const reader = makeDeltaStream().getReader();
// eslint-disable-next-line no-constant-condition
while (true) {
const { done, value } = await reader.read();
if (done) {
break;
}
process.stdout.write(value);
}
}
}
}

async function streamToValues(stream: ReadableStream) {
const values: any[] = [];

await stream.pipeTo(new WritableStream({
write(chunk) {
values.push(chunk);
}
}))

return values;
}

// This all needs to be validated with genuine async.

const renderable = <App />
Contributor Author: This section and below demonstrate how this API would be used.

const rendered = render(renderable);
console.log('=== First Render ===')
// You can consume a stream multiple times.
console.log(await Promise.all([
streamToValues(rendered.treeStream()),
streamToValues(rendered.treeStream()),
streamToValues(rendered.deltaStream()),
]))
// If you consume a stream after the render is complete, you just get one chunk with the final result.
Contributor: This could also just be a change to the existing render?

console.log(await streamToValues(rendered.treeStream()))
console.log('Deltas', await streamToValues(rendered.deltaStream()))
console.log('Final result:', await rendered.result)

console.log();
console.log('=== Second Render ===')
// Pass a single map function, and it's applied to intermediate and the final results.
Contributor: This could (should?) just be a change to the existing render?

const mappedRender = render(renderable, {
map: frame => `frame prefix: ${frame}`
})
console.log('Tree stream', await streamToValues(mappedRender.treeStream()))
console.log('Append stream', await streamToValues(mappedRender.appendStream()))
console.log('Deltas', await streamToValues(mappedRender.deltaStream()))
console.log('Final result:', await mappedRender.result)

console.log();
console.log('=== Third Render ===')
const renderForAppendStream = render(renderable);
console.log(await streamToValues(renderForAppendStream.treeStream()));
console.log(await streamToValues(renderForAppendStream.appendStream()));

console.log();
console.log('=== Fourth Render: writing to stdout ===')
await render(renderable).writeToStdout();
console.log('\nDone writing to stdout')
3 changes: 2 additions & 1 deletion packages/examples/tsconfig.json
@@ -1,13 +1,14 @@
{
"extends": "@tsconfig/node18/tsconfig.json",
"include": ["src/**/*", ".eslintrc.cjs"],
"include": ["src/simple-chat.tsx", ".eslintrc.cjs"],
Contributor Author: This was just for local testing.

"compilerOptions": {
"jsx": "react-jsx",
"jsxImportSource": "ai-jsx",
"moduleResolution": "node16",
"module": "esnext",
"resolveJsonModule": true,
"outDir": "dist",
"lib": ["DOM", "DOM.Iterable"],
"declaration": true
}
}